Test Report: KVM_Linux_crio 19576

2e9b50ac88536491e648f1503809a6b59d99d481:2024-09-06:36104

Failed tests (32/312)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 74.13
34 TestAddons/parallel/Ingress 150.29
36 TestAddons/parallel/MetricsServer 316.19
111 TestFunctional/parallel/License 0.11
164 TestMultiControlPlane/serial/StopSecondaryNode 141.97
166 TestMultiControlPlane/serial/RestartSecondaryNode 56.39
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 814.04
169 TestMultiControlPlane/serial/DeleteSecondaryNode 18.14
171 TestMultiControlPlane/serial/StopCluster 173.01
231 TestMultiNode/serial/RestartKeepsNodes 326.34
233 TestMultiNode/serial/StopMultiNode 141.29
240 TestPreload 162.29
248 TestKubernetesUpgrade 435.01
284 TestPause/serial/SecondStartNoReconfiguration 55.56
316 TestStartStop/group/old-k8s-version/serial/FirstStart 289.65
340 TestStartStop/group/no-preload/serial/Stop 139.08
342 TestStartStop/group/embed-certs/serial/Stop 139.09
345 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.05
346 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
347 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.39
350 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
351 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 91.6
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/old-k8s-version/serial/SecondStart 728.14
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.21
358 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.18
359 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.21
360 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.36
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 429.15
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 455.61
363 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 315.43
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 144.51
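
Note: any failure in the table above can normally be reproduced by running the matching integration test by name from a minikube source checkout. The invocation below is an assumption based on the standard minikube repo layout (test/integration) and plain go test flags, not something recorded in this report; the job's kvm2/crio start arguments would typically be passed as additional test arguments.

    # hypothetical local re-run of the first failure in the table
    go test ./test/integration -v -timeout 90m -run "TestAddons/parallel/Registry"
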
TestAddons/parallel/Registry (74.13s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.21126ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-4hp57" [995000c4-356d-4aee-b8b4-6c719240ca26] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003344255s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5jxb2" [8ea39930-6a75-4ad5-a074-233a2b95f98f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00471846s
addons_test.go:342: (dbg) Run:  kubectl --context addons-959832 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-959832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-959832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.089974498s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-959832 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 ip
2024/09/06 18:40:57 [DEBUG] GET http://192.168.39.98:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 addons disable registry --alsologtostderr -v=1
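
The step that actually failed above is the in-cluster probe: the busybox pod's wget against http://registry.kube-system.svc.cluster.local never answered, and kubectl run exited after about a minute with "timed out waiting for the condition". A rough manual re-check (an assumption, not part of the recorded run) would be to look at the registry Service and its endpoints in kube-system (the DNS name implies a Service named "registry" there) and then repeat the same probe:

    kubectl --context addons-959832 -n kube-system get svc,endpoints registry
    kubectl --context addons-959832 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
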
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-959832 -n addons-959832
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-959832 logs -n 25: (1.385983256s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-726386                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-726386                                                                     | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | -o=json --download-only                                                                     | download-only-693029 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-693029                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-693029                                                                     | download-only-693029 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-726386                                                                     | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-693029                                                                     | download-only-693029 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-071210 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | binary-mirror-071210                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42457                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-071210                                                                     | binary-mirror-071210 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-959832 --wait=true                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:39 UTC | 06 Sep 24 18:39 UTC |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-959832 ssh curl -s                                                                   | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-959832 addons                                                                        | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-959832 addons                                                                        | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-959832 ssh cat                                                                       | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | /opt/local-path-provisioner/pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-959832 ip                                                                            | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:30.440394   13823 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:30.440643   13823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:30.440652   13823 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:30.440656   13823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:30.440824   13823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:29:30.441460   13823 out.go:352] Setting JSON to false
	I0906 18:29:30.442255   13823 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":719,"bootTime":1725646651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:29:30.442312   13823 start.go:139] virtualization: kvm guest
	I0906 18:29:30.444228   13823 out.go:177] * [addons-959832] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 18:29:30.445334   13823 notify.go:220] Checking for updates...
	I0906 18:29:30.445342   13823 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:29:30.446652   13823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:30.448060   13823 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:29:30.449528   13823 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:30.450779   13823 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:29:30.451986   13823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:29:30.453700   13823 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:30.485465   13823 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 18:29:30.486701   13823 start.go:297] selected driver: kvm2
	I0906 18:29:30.486713   13823 start.go:901] validating driver "kvm2" against <nil>
	I0906 18:29:30.486727   13823 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:29:30.487397   13823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:30.487478   13823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 18:29:30.502694   13823 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 18:29:30.502738   13823 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:30.502931   13823 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:29:30.502959   13823 cni.go:84] Creating CNI manager for ""
	I0906 18:29:30.502966   13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 18:29:30.502978   13823 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 18:29:30.503026   13823 start.go:340] cluster config:
	{Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:30.503117   13823 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:30.504979   13823 out.go:177] * Starting "addons-959832" primary control-plane node in "addons-959832" cluster
	I0906 18:29:30.506126   13823 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:29:30.506168   13823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 18:29:30.506178   13823 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:30.506272   13823 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 18:29:30.506286   13823 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 18:29:30.506559   13823 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json ...
	I0906 18:29:30.506577   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json: {Name:mkb043cbbb2997cf908fb60acd39795871d65137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:29:30.506698   13823 start.go:360] acquireMachinesLock for addons-959832: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:29:30.506741   13823 start.go:364] duration metric: took 31.601µs to acquireMachinesLock for "addons-959832"
	I0906 18:29:30.506759   13823 start.go:93] Provisioning new machine with config: &{Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:29:30.506820   13823 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 18:29:30.508432   13823 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 18:29:30.508550   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:29:30.508587   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:29:30.522987   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34483
	I0906 18:29:30.523384   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:29:30.523869   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:29:30.523890   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:29:30.524169   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:29:30.524345   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:30.524450   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:30.524591   13823 start.go:159] libmachine.API.Create for "addons-959832" (driver="kvm2")
	I0906 18:29:30.524624   13823 client.go:168] LocalClient.Create starting
	I0906 18:29:30.524668   13823 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 18:29:30.595679   13823 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 18:29:30.708441   13823 main.go:141] libmachine: Running pre-create checks...
	I0906 18:29:30.708464   13823 main.go:141] libmachine: (addons-959832) Calling .PreCreateCheck
	I0906 18:29:30.708957   13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
	I0906 18:29:30.709397   13823 main.go:141] libmachine: Creating machine...
	I0906 18:29:30.709410   13823 main.go:141] libmachine: (addons-959832) Calling .Create
	I0906 18:29:30.709556   13823 main.go:141] libmachine: (addons-959832) Creating KVM machine...
	I0906 18:29:30.710795   13823 main.go:141] libmachine: (addons-959832) DBG | found existing default KVM network
	I0906 18:29:30.711508   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:30.711378   13845 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0906 18:29:30.711570   13823 main.go:141] libmachine: (addons-959832) DBG | created network xml: 
	I0906 18:29:30.711607   13823 main.go:141] libmachine: (addons-959832) DBG | <network>
	I0906 18:29:30.711624   13823 main.go:141] libmachine: (addons-959832) DBG |   <name>mk-addons-959832</name>
	I0906 18:29:30.711646   13823 main.go:141] libmachine: (addons-959832) DBG |   <dns enable='no'/>
	I0906 18:29:30.711654   13823 main.go:141] libmachine: (addons-959832) DBG |   
	I0906 18:29:30.711661   13823 main.go:141] libmachine: (addons-959832) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0906 18:29:30.711668   13823 main.go:141] libmachine: (addons-959832) DBG |     <dhcp>
	I0906 18:29:30.711673   13823 main.go:141] libmachine: (addons-959832) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0906 18:29:30.711684   13823 main.go:141] libmachine: (addons-959832) DBG |     </dhcp>
	I0906 18:29:30.711691   13823 main.go:141] libmachine: (addons-959832) DBG |   </ip>
	I0906 18:29:30.711698   13823 main.go:141] libmachine: (addons-959832) DBG |   
	I0906 18:29:30.711706   13823 main.go:141] libmachine: (addons-959832) DBG | </network>
	I0906 18:29:30.711714   13823 main.go:141] libmachine: (addons-959832) DBG | 
	I0906 18:29:30.716914   13823 main.go:141] libmachine: (addons-959832) DBG | trying to create private KVM network mk-addons-959832 192.168.39.0/24...
	I0906 18:29:30.784502   13823 main.go:141] libmachine: (addons-959832) DBG | private KVM network mk-addons-959832 192.168.39.0/24 created
	I0906 18:29:30.784548   13823 main.go:141] libmachine: (addons-959832) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 ...
	I0906 18:29:30.784580   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:30.784495   13845 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:30.784596   13823 main.go:141] libmachine: (addons-959832) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:29:30.784621   13823 main.go:141] libmachine: (addons-959832) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 18:29:31.031605   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.031496   13845 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa...
	I0906 18:29:31.150285   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.150157   13845 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/addons-959832.rawdisk...
	I0906 18:29:31.150312   13823 main.go:141] libmachine: (addons-959832) DBG | Writing magic tar header
	I0906 18:29:31.150322   13823 main.go:141] libmachine: (addons-959832) DBG | Writing SSH key tar header
	I0906 18:29:31.150329   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.150306   13845 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 ...
	I0906 18:29:31.150514   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832
	I0906 18:29:31.150551   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 18:29:31.150582   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 (perms=drwx------)
	I0906 18:29:31.150604   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 18:29:31.150630   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 18:29:31.150652   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 18:29:31.150664   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:31.150681   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 18:29:31.150694   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 18:29:31.150709   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins
	I0906 18:29:31.150726   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 18:29:31.150738   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home
	I0906 18:29:31.150755   13823 main.go:141] libmachine: (addons-959832) DBG | Skipping /home - not owner
	I0906 18:29:31.150771   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 18:29:31.150781   13823 main.go:141] libmachine: (addons-959832) Creating domain...
	I0906 18:29:31.151641   13823 main.go:141] libmachine: (addons-959832) define libvirt domain using xml: 
	I0906 18:29:31.151668   13823 main.go:141] libmachine: (addons-959832) <domain type='kvm'>
	I0906 18:29:31.151680   13823 main.go:141] libmachine: (addons-959832)   <name>addons-959832</name>
	I0906 18:29:31.151693   13823 main.go:141] libmachine: (addons-959832)   <memory unit='MiB'>4000</memory>
	I0906 18:29:31.151703   13823 main.go:141] libmachine: (addons-959832)   <vcpu>2</vcpu>
	I0906 18:29:31.151718   13823 main.go:141] libmachine: (addons-959832)   <features>
	I0906 18:29:31.151723   13823 main.go:141] libmachine: (addons-959832)     <acpi/>
	I0906 18:29:31.151727   13823 main.go:141] libmachine: (addons-959832)     <apic/>
	I0906 18:29:31.151736   13823 main.go:141] libmachine: (addons-959832)     <pae/>
	I0906 18:29:31.151741   13823 main.go:141] libmachine: (addons-959832)     
	I0906 18:29:31.151747   13823 main.go:141] libmachine: (addons-959832)   </features>
	I0906 18:29:31.151754   13823 main.go:141] libmachine: (addons-959832)   <cpu mode='host-passthrough'>
	I0906 18:29:31.151759   13823 main.go:141] libmachine: (addons-959832)   
	I0906 18:29:31.151772   13823 main.go:141] libmachine: (addons-959832)   </cpu>
	I0906 18:29:31.151779   13823 main.go:141] libmachine: (addons-959832)   <os>
	I0906 18:29:31.151788   13823 main.go:141] libmachine: (addons-959832)     <type>hvm</type>
	I0906 18:29:31.151795   13823 main.go:141] libmachine: (addons-959832)     <boot dev='cdrom'/>
	I0906 18:29:31.151801   13823 main.go:141] libmachine: (addons-959832)     <boot dev='hd'/>
	I0906 18:29:31.151808   13823 main.go:141] libmachine: (addons-959832)     <bootmenu enable='no'/>
	I0906 18:29:31.151812   13823 main.go:141] libmachine: (addons-959832)   </os>
	I0906 18:29:31.151818   13823 main.go:141] libmachine: (addons-959832)   <devices>
	I0906 18:29:31.151825   13823 main.go:141] libmachine: (addons-959832)     <disk type='file' device='cdrom'>
	I0906 18:29:31.151834   13823 main.go:141] libmachine: (addons-959832)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/boot2docker.iso'/>
	I0906 18:29:31.151841   13823 main.go:141] libmachine: (addons-959832)       <target dev='hdc' bus='scsi'/>
	I0906 18:29:31.151847   13823 main.go:141] libmachine: (addons-959832)       <readonly/>
	I0906 18:29:31.151853   13823 main.go:141] libmachine: (addons-959832)     </disk>
	I0906 18:29:31.151859   13823 main.go:141] libmachine: (addons-959832)     <disk type='file' device='disk'>
	I0906 18:29:31.151867   13823 main.go:141] libmachine: (addons-959832)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 18:29:31.151878   13823 main.go:141] libmachine: (addons-959832)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/addons-959832.rawdisk'/>
	I0906 18:29:31.151886   13823 main.go:141] libmachine: (addons-959832)       <target dev='hda' bus='virtio'/>
	I0906 18:29:31.151894   13823 main.go:141] libmachine: (addons-959832)     </disk>
	I0906 18:29:31.151899   13823 main.go:141] libmachine: (addons-959832)     <interface type='network'>
	I0906 18:29:31.151908   13823 main.go:141] libmachine: (addons-959832)       <source network='mk-addons-959832'/>
	I0906 18:29:31.151915   13823 main.go:141] libmachine: (addons-959832)       <model type='virtio'/>
	I0906 18:29:31.151923   13823 main.go:141] libmachine: (addons-959832)     </interface>
	I0906 18:29:31.151931   13823 main.go:141] libmachine: (addons-959832)     <interface type='network'>
	I0906 18:29:31.151957   13823 main.go:141] libmachine: (addons-959832)       <source network='default'/>
	I0906 18:29:31.151984   13823 main.go:141] libmachine: (addons-959832)       <model type='virtio'/>
	I0906 18:29:31.151993   13823 main.go:141] libmachine: (addons-959832)     </interface>
	I0906 18:29:31.152008   13823 main.go:141] libmachine: (addons-959832)     <serial type='pty'>
	I0906 18:29:31.152028   13823 main.go:141] libmachine: (addons-959832)       <target port='0'/>
	I0906 18:29:31.152046   13823 main.go:141] libmachine: (addons-959832)     </serial>
	I0906 18:29:31.152059   13823 main.go:141] libmachine: (addons-959832)     <console type='pty'>
	I0906 18:29:31.152070   13823 main.go:141] libmachine: (addons-959832)       <target type='serial' port='0'/>
	I0906 18:29:31.152078   13823 main.go:141] libmachine: (addons-959832)     </console>
	I0906 18:29:31.152086   13823 main.go:141] libmachine: (addons-959832)     <rng model='virtio'>
	I0906 18:29:31.152095   13823 main.go:141] libmachine: (addons-959832)       <backend model='random'>/dev/random</backend>
	I0906 18:29:31.152103   13823 main.go:141] libmachine: (addons-959832)     </rng>
	I0906 18:29:31.152113   13823 main.go:141] libmachine: (addons-959832)     
	I0906 18:29:31.152126   13823 main.go:141] libmachine: (addons-959832)     
	I0906 18:29:31.152138   13823 main.go:141] libmachine: (addons-959832)   </devices>
	I0906 18:29:31.152148   13823 main.go:141] libmachine: (addons-959832) </domain>
	I0906 18:29:31.152161   13823 main.go:141] libmachine: (addons-959832) 
	I0906 18:29:31.158081   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:b5:f5:6a in network default
	I0906 18:29:31.158542   13823 main.go:141] libmachine: (addons-959832) Ensuring networks are active...
	I0906 18:29:31.158562   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:31.159097   13823 main.go:141] libmachine: (addons-959832) Ensuring network default is active
	I0906 18:29:31.159345   13823 main.go:141] libmachine: (addons-959832) Ensuring network mk-addons-959832 is active
	I0906 18:29:31.159767   13823 main.go:141] libmachine: (addons-959832) Getting domain xml...
	I0906 18:29:31.160314   13823 main.go:141] libmachine: (addons-959832) Creating domain...
	I0906 18:29:32.546282   13823 main.go:141] libmachine: (addons-959832) Waiting to get IP...
	I0906 18:29:32.547051   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:32.547580   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:32.547618   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:32.547518   13845 retry.go:31] will retry after 234.819193ms: waiting for machine to come up
	I0906 18:29:32.783988   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:32.784398   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:32.784420   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:32.784350   13845 retry.go:31] will retry after 374.097016ms: waiting for machine to come up
	I0906 18:29:33.159641   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:33.160076   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:33.160104   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:33.160024   13845 retry.go:31] will retry after 398.438198ms: waiting for machine to come up
	I0906 18:29:33.559453   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:33.559850   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:33.559879   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:33.559800   13845 retry.go:31] will retry after 513.667683ms: waiting for machine to come up
	I0906 18:29:34.075531   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:34.075976   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:34.076002   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:34.075937   13845 retry.go:31] will retry after 542.640322ms: waiting for machine to come up
	I0906 18:29:34.620767   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:34.621139   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:34.621164   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:34.621100   13845 retry.go:31] will retry after 952.553494ms: waiting for machine to come up
	I0906 18:29:35.575061   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:35.575519   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:35.575550   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:35.575475   13845 retry.go:31] will retry after 761.897484ms: waiting for machine to come up
	I0906 18:29:36.339380   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:36.339747   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:36.339775   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:36.339696   13845 retry.go:31] will retry after 1.058974587s: waiting for machine to come up
	I0906 18:29:37.399861   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:37.400184   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:37.400204   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:37.400146   13845 retry.go:31] will retry after 1.319275872s: waiting for machine to come up
	I0906 18:29:38.720600   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:38.721039   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:38.721065   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:38.720974   13845 retry.go:31] will retry after 1.544734383s: waiting for machine to come up
	I0906 18:29:40.267964   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:40.268338   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:40.268365   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:40.268303   13845 retry.go:31] will retry after 2.517498837s: waiting for machine to come up
	I0906 18:29:42.790192   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:42.790620   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:42.790646   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:42.790574   13845 retry.go:31] will retry after 2.829630462s: waiting for machine to come up
	I0906 18:29:45.621992   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:45.622542   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:45.622614   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:45.622535   13845 retry.go:31] will retry after 3.555249592s: waiting for machine to come up
	I0906 18:29:49.181782   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:49.182176   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:49.182199   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:49.182134   13845 retry.go:31] will retry after 4.155059883s: waiting for machine to come up
	I0906 18:29:53.340058   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:53.340648   13823 main.go:141] libmachine: (addons-959832) Found IP for machine: 192.168.39.98
	I0906 18:29:53.340677   13823 main.go:141] libmachine: (addons-959832) Reserving static IP address...
	I0906 18:29:53.340693   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has current primary IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:53.341097   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find host DHCP lease matching {name: "addons-959832", mac: "52:54:00:c2:2d:3d", ip: "192.168.39.98"} in network mk-addons-959832
	I0906 18:29:53.410890   13823 main.go:141] libmachine: (addons-959832) DBG | Getting to WaitForSSH function...
	I0906 18:29:53.410935   13823 main.go:141] libmachine: (addons-959832) Reserved static IP address: 192.168.39.98
	I0906 18:29:53.410957   13823 main.go:141] libmachine: (addons-959832) Waiting for SSH to be available...
	I0906 18:29:53.413061   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:53.413353   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832
	I0906 18:29:53.413381   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find defined IP address of network mk-addons-959832 interface with MAC address 52:54:00:c2:2d:3d
	I0906 18:29:53.413528   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH client type: external
	I0906 18:29:53.413551   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa (-rw-------)
	I0906 18:29:53.413582   13823 main.go:141] libmachine: (addons-959832) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:29:53.413596   13823 main.go:141] libmachine: (addons-959832) DBG | About to run SSH command:
	I0906 18:29:53.413610   13823 main.go:141] libmachine: (addons-959832) DBG | exit 0
	I0906 18:29:53.424764   13823 main.go:141] libmachine: (addons-959832) DBG | SSH cmd err, output: exit status 255: 
	I0906 18:29:53.424790   13823 main.go:141] libmachine: (addons-959832) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0906 18:29:53.424803   13823 main.go:141] libmachine: (addons-959832) DBG | command : exit 0
	I0906 18:29:53.424811   13823 main.go:141] libmachine: (addons-959832) DBG | err     : exit status 255
	I0906 18:29:53.424834   13823 main.go:141] libmachine: (addons-959832) DBG | output  : 
	I0906 18:29:56.425071   13823 main.go:141] libmachine: (addons-959832) DBG | Getting to WaitForSSH function...
	I0906 18:29:56.427965   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.428313   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.428337   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.428498   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH client type: external
	I0906 18:29:56.428529   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa (-rw-------)
	I0906 18:29:56.428584   13823 main.go:141] libmachine: (addons-959832) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:29:56.428611   13823 main.go:141] libmachine: (addons-959832) DBG | About to run SSH command:
	I0906 18:29:56.428625   13823 main.go:141] libmachine: (addons-959832) DBG | exit 0
	I0906 18:29:56.557151   13823 main.go:141] libmachine: (addons-959832) DBG | SSH cmd err, output: <nil>: 
	I0906 18:29:56.557379   13823 main.go:141] libmachine: (addons-959832) KVM machine creation complete!
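The 18:29:53 attempt above fails with exit status 255 (the DHCP lease had not shown up yet, so the ssh target was "docker@" with no address) and the 18:29:56 retry succeeds once the guest has 192.168.39.98. A minimal Go sketch of that wait-for-SSH pattern, assuming a plain `ssh` binary and a hypothetical key path rather than libmachine's actual WaitForSSH implementation:

// waitssh.go: hypothetical retry loop, not libmachine's real code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs a trivial remote command ("exit 0") until sshd answers,
// sleeping between attempts like the 3-second retry visible in the log.
func waitForSSH(addr, keyPath string, attempts int, delay time.Duration) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd is up and accepted our key
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("ssh to %s not ready after %d attempts", addr, attempts)
}

func main() {
	// Address from the log; key path is a placeholder.
	if err := waitForSSH("192.168.39.98", "/path/to/id_rsa", 20, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}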
	I0906 18:29:56.557702   13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
	I0906 18:29:56.558229   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:56.558444   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:56.558623   13823 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 18:29:56.558641   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:29:56.559843   13823 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 18:29:56.559860   13823 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 18:29:56.559867   13823 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 18:29:56.559876   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.562179   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.562551   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.562587   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.562760   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.562922   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.563071   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.563184   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.563323   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.563491   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.563501   13823 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 18:29:56.672324   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:29:56.672345   13823 main.go:141] libmachine: Detecting the provisioner...
	I0906 18:29:56.672355   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.675030   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.675361   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.675396   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.675587   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.675810   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.675962   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.676117   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.676285   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.676485   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.676498   13823 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 18:29:56.789500   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 18:29:56.789599   13823 main.go:141] libmachine: found compatible host: buildroot
	I0906 18:29:56.789615   13823 main.go:141] libmachine: Provisioning with buildroot...
	I0906 18:29:56.789627   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:56.789887   13823 buildroot.go:166] provisioning hostname "addons-959832"
	I0906 18:29:56.789910   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:56.790145   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.792479   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.792813   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.792840   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.792964   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.793128   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.793278   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.793413   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.793564   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.793755   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.793770   13823 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-959832 && echo "addons-959832" | sudo tee /etc/hostname
	I0906 18:29:56.923171   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-959832
	
	I0906 18:29:56.923196   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.925829   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.926137   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.926165   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.926301   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.926516   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.926688   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.926855   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.927018   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.927167   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.927182   13823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-959832' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-959832/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-959832' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:29:57.047682   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:29:57.047717   13823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 18:29:57.047760   13823 buildroot.go:174] setting up certificates
	I0906 18:29:57.047779   13823 provision.go:84] configureAuth start
	I0906 18:29:57.047796   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:57.048060   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:57.050451   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.050790   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.050828   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.050983   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.053241   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.053584   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.053615   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.053778   13823 provision.go:143] copyHostCerts
	I0906 18:29:57.053849   13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 18:29:57.054015   13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 18:29:57.054086   13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 18:29:57.054144   13823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.addons-959832 san=[127.0.0.1 192.168.39.98 addons-959832 localhost minikube]
	I0906 18:29:57.192700   13823 provision.go:177] copyRemoteCerts
	I0906 18:29:57.192756   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:29:57.192779   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.195474   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.195742   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.195770   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.195927   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.196116   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.196268   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.196488   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.284813   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 18:29:57.312554   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 18:29:57.338356   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:29:57.363612   13823 provision.go:87] duration metric: took 315.815529ms to configureAuth
	I0906 18:29:57.363640   13823 buildroot.go:189] setting minikube options for container-runtime
	I0906 18:29:57.363826   13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:29:57.363907   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.366452   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.366841   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.366868   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.367008   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.367195   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.367349   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.367475   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.367620   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:57.367765   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:57.367779   13823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 18:29:57.603163   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 18:29:57.603188   13823 main.go:141] libmachine: Checking connection to Docker...
	I0906 18:29:57.603196   13823 main.go:141] libmachine: (addons-959832) Calling .GetURL
	I0906 18:29:57.604560   13823 main.go:141] libmachine: (addons-959832) DBG | Using libvirt version 6000000
	I0906 18:29:57.606895   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.607175   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.607201   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.607398   13823 main.go:141] libmachine: Docker is up and running!
	I0906 18:29:57.607413   13823 main.go:141] libmachine: Reticulating splines...
	I0906 18:29:57.607421   13823 client.go:171] duration metric: took 27.082788539s to LocalClient.Create
	I0906 18:29:57.607447   13823 start.go:167] duration metric: took 27.082857245s to libmachine.API.Create "addons-959832"
	I0906 18:29:57.607462   13823 start.go:293] postStartSetup for "addons-959832" (driver="kvm2")
	I0906 18:29:57.607488   13823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:29:57.607514   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.607782   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:29:57.607801   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.609814   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.610081   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.610134   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.610226   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.610417   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.610608   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.610769   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.695798   13823 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:29:57.700464   13823 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 18:29:57.700493   13823 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 18:29:57.700596   13823 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 18:29:57.700630   13823 start.go:296] duration metric: took 93.15804ms for postStartSetup
	I0906 18:29:57.700663   13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
	I0906 18:29:57.701257   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:57.704196   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.704554   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.704585   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.704877   13823 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json ...
	I0906 18:29:57.705072   13823 start.go:128] duration metric: took 27.1982419s to createHost
	I0906 18:29:57.705098   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.707499   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.707842   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.707862   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.708035   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.708256   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.708433   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.708569   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.708760   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:57.708991   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:57.709005   13823 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 18:29:57.821756   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725647397.800291454
	
	I0906 18:29:57.821779   13823 fix.go:216] guest clock: 1725647397.800291454
	I0906 18:29:57.821789   13823 fix.go:229] Guest: 2024-09-06 18:29:57.800291454 +0000 UTC Remote: 2024-09-06 18:29:57.705083739 +0000 UTC m=+27.297090225 (delta=95.207715ms)
	I0906 18:29:57.821840   13823 fix.go:200] guest clock delta is within tolerance: 95.207715ms
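The fix.go lines above read the guest clock with `date +%s.%N`, diff it against the host clock, and accept the 95ms delta. A rough Go sketch of that comparison; the 2-second tolerance is assumed for illustration, as the log does not state the actual threshold:

// clockdelta.go: hypothetical guest/host clock comparison, not minikube's fix.go.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad or truncate the fractional part to exactly 9 nanosecond digits
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1725647397.800291454") // value from the log
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	tolerance := 2 * time.Second // assumed threshold
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}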
	I0906 18:29:57.821853   13823 start.go:83] releasing machines lock for "addons-959832", held for 27.315095887s
	I0906 18:29:57.821881   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.822185   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:57.824591   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.824964   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.824991   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.825103   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.825621   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.825837   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.825955   13823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:29:57.825998   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.826048   13823 ssh_runner.go:195] Run: cat /version.json
	I0906 18:29:57.826075   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.828396   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.828722   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.828752   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.828771   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.828910   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.829111   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.829201   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.829221   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.829287   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.829450   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.829463   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.829621   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.829749   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.829859   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.948786   13823 ssh_runner.go:195] Run: systemctl --version
	I0906 18:29:57.955191   13823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 18:29:58.113311   13823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 18:29:58.119769   13823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:29:58.119846   13823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:29:58.135762   13823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 18:29:58.135789   13823 start.go:495] detecting cgroup driver to use...
	I0906 18:29:58.135859   13823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 18:29:58.151729   13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:29:58.166404   13823 docker.go:217] disabling cri-docker service (if available) ...
	I0906 18:29:58.166473   13823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 18:29:58.180954   13823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 18:29:58.195119   13823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 18:29:58.315328   13823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 18:29:58.467302   13823 docker.go:233] disabling docker service ...
	I0906 18:29:58.467362   13823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 18:29:58.482228   13823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 18:29:58.495471   13823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 18:29:58.606896   13823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 18:29:58.717897   13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 18:29:58.732638   13823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:29:58.751394   13823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 18:29:58.751461   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.762265   13823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 18:29:58.762343   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.772625   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.783002   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.793237   13823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:29:58.804024   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.814731   13823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.832054   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.842905   13823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:29:58.852537   13823 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 18:29:58.852595   13823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 18:29:58.866354   13823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:29:58.877194   13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:29:59.004604   13823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 18:29:59.101439   13823 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 18:29:59.101538   13823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 18:29:59.106286   13823 start.go:563] Will wait 60s for crictl version
	I0906 18:29:59.106358   13823 ssh_runner.go:195] Run: which crictl
	I0906 18:29:59.110304   13823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:29:59.148807   13823 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 18:29:59.148953   13823 ssh_runner.go:195] Run: crio --version
	I0906 18:29:59.178394   13823 ssh_runner.go:195] Run: crio --version
	I0906 18:29:59.210051   13823 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 18:29:59.211504   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:59.214173   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:59.214515   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:59.214548   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:59.214703   13823 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 18:29:59.218969   13823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:29:59.231960   13823 kubeadm.go:883] updating cluster {Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 18:29:59.232084   13823 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:29:59.232129   13823 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:29:59.263727   13823 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 18:29:59.263807   13823 ssh_runner.go:195] Run: which lz4
	I0906 18:29:59.267901   13823 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 18:29:59.271879   13823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 18:29:59.271906   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 18:30:00.584417   13823 crio.go:462] duration metric: took 1.316553716s to copy over tarball
	I0906 18:30:00.584486   13823 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 18:30:02.812933   13823 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.228424681s)
	I0906 18:30:02.812968   13823 crio.go:469] duration metric: took 2.22852468s to extract the tarball
	I0906 18:30:02.812978   13823 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 18:30:02.850138   13823 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:30:02.893341   13823 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 18:30:02.893365   13823 cache_images.go:84] Images are preloaded, skipping loading
	I0906 18:30:02.893375   13823 kubeadm.go:934] updating node { 192.168.39.98 8443 v1.31.0 crio true true} ...
	I0906 18:30:02.893497   13823 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-959832 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 18:30:02.893579   13823 ssh_runner.go:195] Run: crio config
	I0906 18:30:02.943751   13823 cni.go:84] Creating CNI manager for ""
	I0906 18:30:02.943774   13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 18:30:02.943794   13823 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 18:30:02.943823   13823 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.98 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-959832 NodeName:addons-959832 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 18:30:02.943970   13823 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-959832"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 18:30:02.944029   13823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:30:02.953978   13823 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 18:30:02.954045   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 18:30:02.963215   13823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 18:30:02.979953   13823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:30:02.996152   13823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
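The kubeadm.yaml.new written above is the cluster config from the profile rendered with this node's name, IP, and API server port. A hedged Go sketch of that kind of substitution using text/template; the fragment, struct, and field names are illustrative, not minikube's actual template code:

// kubeadmtmpl.go: hypothetical rendering of a kubeadm config fragment.
package main

import (
	"os"
	"text/template"
)

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type params struct {
	NodeName      string
	NodeIP        string
	APIServerPort int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	// Values taken from the cluster config shown earlier in the log.
	p := params{NodeName: "addons-959832", NodeIP: "192.168.39.98", APIServerPort: 8443}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}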
	I0906 18:30:03.012715   13823 ssh_runner.go:195] Run: grep 192.168.39.98	control-plane.minikube.internal$ /etc/hosts
	I0906 18:30:03.016576   13823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:30:03.028370   13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:03.151085   13823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:03.168582   13823 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832 for IP: 192.168.39.98
	I0906 18:30:03.168607   13823 certs.go:194] generating shared ca certs ...
	I0906 18:30:03.168628   13823 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.168788   13823 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 18:30:03.299866   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt ...
	I0906 18:30:03.299897   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt: {Name:mke2b7c471d9f59e720011f7b10016af11ee9297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.300069   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key ...
	I0906 18:30:03.300084   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key: {Name:mkfac70472d4bba2ebe5c985be8bd475bcc6f548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.300181   13823 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 18:30:03.425280   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt ...
	I0906 18:30:03.425310   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt: {Name:mk08fa1d396d35f7ec100676e804094098a4d70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.425492   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key ...
	I0906 18:30:03.425520   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key: {Name:mk8fe87021c9d97780410b17544e3c226973cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.425623   13823 certs.go:256] generating profile certs ...
	I0906 18:30:03.425675   13823 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key
	I0906 18:30:03.425689   13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt with IP's: []
	I0906 18:30:03.659418   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt ...
	I0906 18:30:03.659450   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: {Name:mk0f9c2f503201837abe2d4909970e9be7ff24f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.659616   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key ...
	I0906 18:30:03.659626   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key: {Name:mkdc65ba0a6775a2f0eae4f7b7974195d86c87d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.659695   13823 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e
	I0906 18:30:03.659712   13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.98]
	I0906 18:30:03.747012   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e ...
	I0906 18:30:03.747038   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e: {Name:mkac8ea9fd65a4ebd10dcac540165d914ce7db8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.747178   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e ...
	I0906 18:30:03.747192   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e: {Name:mk4a1ef0165a60b29c7ae52805cfb6305e8fcd01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.747259   13823 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt
	I0906 18:30:03.747327   13823 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key
	I0906 18:30:03.747377   13823 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key
	I0906 18:30:03.747394   13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt with IP's: []
	I0906 18:30:03.959127   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt ...
	I0906 18:30:03.959155   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt: {Name:mkde7bd5ab135e6d5e9a29c7a353c7a7ff8f667c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.959314   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key ...
	I0906 18:30:03.959329   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key: {Name:mkaff3d579d60be2767a53917ba5e3ae0b22c412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.959489   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 18:30:03.959520   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:30:03.959543   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:30:03.959565   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
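The certs.go steps above generate a "minikubeCA" root and then profile certs signed by it (client, apiserver with the listed SANs, proxy-client/aggregator). A self-contained Go sketch of the CA-generation half using crypto/x509; the file names, key size, and validity period are assumptions, not minikube's exact parameters:

// gen_ca.go: hypothetical CA generation similar to the "minikubeCA" step above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA private key.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed CA template; leaf certs (apiserver, client, ...) would be
	// signed with this cert/key pair instead of with themselves.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Write PEM-encoded cert and key, analogous to ca.crt / ca.key above.
	certOut, _ := os.Create("ca.crt")
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()
	keyOut, _ := os.Create("ca.key")
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}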
	I0906 18:30:03.960109   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:30:03.987472   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 18:30:04.010859   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:30:04.045335   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:30:04.069442   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 18:30:04.096260   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 18:30:04.121182   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:30:04.149817   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 18:30:04.173890   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:30:04.197498   13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 18:30:04.216950   13823 ssh_runner.go:195] Run: openssl version
	I0906 18:30:04.222654   13823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:30:04.233330   13823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:04.237701   13823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:04.237760   13823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:04.243532   13823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 18:30:04.256013   13823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:30:04.260734   13823 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:30:04.260787   13823 kubeadm.go:392] StartCluster: {Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:30:04.260898   13823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 18:30:04.260952   13823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 18:30:04.303067   13823 cri.go:89] found id: ""
	I0906 18:30:04.303126   13823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 18:30:04.313281   13823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 18:30:04.324983   13823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 18:30:04.335214   13823 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 18:30:04.335235   13823 kubeadm.go:157] found existing configuration files:
	
	I0906 18:30:04.335277   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 18:30:04.344648   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 18:30:04.344695   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 18:30:04.354421   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 18:30:04.363814   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 18:30:04.363883   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 18:30:04.373191   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 18:30:04.382426   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 18:30:04.382489   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 18:30:04.392389   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 18:30:04.402110   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 18:30:04.402181   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 18:30:04.411730   13823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 18:30:04.463645   13823 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 18:30:04.463694   13823 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 18:30:04.559431   13823 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 18:30:04.559574   13823 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 18:30:04.559691   13823 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 18:30:04.568785   13823 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 18:30:04.633550   13823 out.go:235]   - Generating certificates and keys ...
	I0906 18:30:04.633656   13823 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 18:30:04.633738   13823 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 18:30:04.850232   13823 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 18:30:05.028833   13823 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 18:30:05.198669   13823 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 18:30:05.265171   13823 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 18:30:05.396138   13823 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 18:30:05.396314   13823 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-959832 localhost] and IPs [192.168.39.98 127.0.0.1 ::1]
	I0906 18:30:05.615454   13823 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 18:30:05.615825   13823 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-959832 localhost] and IPs [192.168.39.98 127.0.0.1 ::1]
	I0906 18:30:05.699300   13823 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 18:30:05.879000   13823 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 18:30:05.979662   13823 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 18:30:05.979866   13823 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 18:30:06.143465   13823 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 18:30:06.399160   13823 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 18:30:06.612959   13823 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 18:30:06.801192   13823 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 18:30:06.957635   13823 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 18:30:06.958075   13823 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 18:30:06.960513   13823 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 18:30:06.962637   13823 out.go:235]   - Booting up control plane ...
	I0906 18:30:06.962755   13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 18:30:06.962853   13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 18:30:06.962936   13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 18:30:06.982006   13823 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 18:30:06.987635   13823 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 18:30:06.987741   13823 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 18:30:07.107392   13823 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 18:30:07.107507   13823 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 18:30:07.608684   13823 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.950467ms
	I0906 18:30:07.608794   13823 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 18:30:12.608494   13823 kubeadm.go:310] [api-check] The API server is healthy after 5.001776937s
	I0906 18:30:12.627560   13823 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 18:30:12.653476   13823 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 18:30:12.689334   13823 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 18:30:12.689602   13823 kubeadm.go:310] [mark-control-plane] Marking the node addons-959832 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 18:30:12.704990   13823 kubeadm.go:310] [bootstrap-token] Using token: ithoaf.u83bc4nltc0uwhpo
	I0906 18:30:12.706456   13823 out.go:235]   - Configuring RBAC rules ...
	I0906 18:30:12.706574   13823 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 18:30:12.717372   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 18:30:12.735384   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 18:30:12.742188   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 18:30:12.748903   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 18:30:12.753193   13823 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 18:30:13.018036   13823 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 18:30:13.440120   13823 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 18:30:14.029827   13823 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 18:30:14.029853   13823 kubeadm.go:310] 
	I0906 18:30:14.029954   13823 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 18:30:14.029981   13823 kubeadm.go:310] 
	I0906 18:30:14.030093   13823 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 18:30:14.030104   13823 kubeadm.go:310] 
	I0906 18:30:14.030140   13823 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 18:30:14.030226   13823 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 18:30:14.030309   13823 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 18:30:14.030318   13823 kubeadm.go:310] 
	I0906 18:30:14.030403   13823 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 18:30:14.030428   13823 kubeadm.go:310] 
	I0906 18:30:14.030488   13823 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 18:30:14.030498   13823 kubeadm.go:310] 
	I0906 18:30:14.030561   13823 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 18:30:14.030660   13823 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 18:30:14.030776   13823 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 18:30:14.030796   13823 kubeadm.go:310] 
	I0906 18:30:14.030915   13823 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 18:30:14.031015   13823 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 18:30:14.031028   13823 kubeadm.go:310] 
	I0906 18:30:14.031132   13823 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ithoaf.u83bc4nltc0uwhpo \
	I0906 18:30:14.031273   13823 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 18:30:14.031306   13823 kubeadm.go:310] 	--control-plane 
	I0906 18:30:14.031316   13823 kubeadm.go:310] 
	I0906 18:30:14.031450   13823 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 18:30:14.031472   13823 kubeadm.go:310] 
	I0906 18:30:14.031592   13823 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ithoaf.u83bc4nltc0uwhpo \
	I0906 18:30:14.031750   13823 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 18:30:14.032620   13823 kubeadm.go:310] W0906 18:30:04.444733     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:14.033044   13823 kubeadm.go:310] W0906 18:30:04.446560     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:14.033225   13823 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 18:30:14.033247   13823 cni.go:84] Creating CNI manager for ""
	I0906 18:30:14.033257   13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 18:30:14.035685   13823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 18:30:14.037043   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 18:30:14.051040   13823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 18:30:14.080330   13823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 18:30:14.080403   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:14.080418   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-959832 minikube.k8s.io/updated_at=2024_09_06T18_30_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=addons-959832 minikube.k8s.io/primary=true
	I0906 18:30:14.123199   13823 ops.go:34] apiserver oom_adj: -16
	I0906 18:30:14.247505   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:14.748250   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:15.248440   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:15.747562   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:16.247913   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:16.747636   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:17.248181   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:17.748128   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:17.838400   13823 kubeadm.go:1113] duration metric: took 3.758062138s to wait for elevateKubeSystemPrivileges
	I0906 18:30:17.838441   13823 kubeadm.go:394] duration metric: took 13.577657427s to StartCluster
	I0906 18:30:17.838464   13823 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:17.838613   13823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:30:17.839096   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:17.839337   13823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 18:30:17.839344   13823 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:30:17.839425   13823 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0906 18:30:17.839549   13823 addons.go:69] Setting yakd=true in profile "addons-959832"
	I0906 18:30:17.839564   13823 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-959832"
	I0906 18:30:17.839564   13823 addons.go:69] Setting helm-tiller=true in profile "addons-959832"
	I0906 18:30:17.839600   13823 addons.go:69] Setting storage-provisioner=true in profile "addons-959832"
	I0906 18:30:17.839601   13823 addons.go:69] Setting inspektor-gadget=true in profile "addons-959832"
	I0906 18:30:17.839616   13823 addons.go:234] Setting addon storage-provisioner=true in "addons-959832"
	I0906 18:30:17.839621   13823 addons.go:234] Setting addon inspektor-gadget=true in "addons-959832"
	I0906 18:30:17.839625   13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:30:17.839635   13823 addons.go:234] Setting addon helm-tiller=true in "addons-959832"
	I0906 18:30:17.839624   13823 addons.go:69] Setting ingress-dns=true in profile "addons-959832"
	I0906 18:30:17.839656   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839680   13823 addons.go:234] Setting addon ingress-dns=true in "addons-959832"
	I0906 18:30:17.839708   13823 addons.go:69] Setting metrics-server=true in profile "addons-959832"
	I0906 18:30:17.839721   13823 addons.go:69] Setting gcp-auth=true in profile "addons-959832"
	I0906 18:30:17.839706   13823 addons.go:69] Setting ingress=true in profile "addons-959832"
	I0906 18:30:17.839737   13823 addons.go:234] Setting addon metrics-server=true in "addons-959832"
	I0906 18:30:17.839738   13823 mustload.go:65] Loading cluster: addons-959832
	I0906 18:30:17.839744   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839683   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839951   13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:30:17.840149   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840201   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.840215   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840233   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.839763   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.840319   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840341   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.840156   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.839590   13823 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-959832"
	I0906 18:30:17.840465   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.840490   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839591   13823 addons.go:69] Setting registry=true in profile "addons-959832"
	I0906 18:30:17.840596   13823 addons.go:234] Setting addon registry=true in "addons-959832"
	I0906 18:30:17.840637   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.840665   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840688   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.841280   13823 out.go:177] * Verifying Kubernetes components...
	I0906 18:30:17.839582   13823 addons.go:234] Setting addon yakd=true in "addons-959832"
	I0906 18:30:17.841416   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839685   13823 addons.go:69] Setting volcano=true in profile "addons-959832"
	I0906 18:30:17.841566   13823 addons.go:234] Setting addon volcano=true in "addons-959832"
	I0906 18:30:17.839689   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.841626   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.841783   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.841812   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.841859   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.839695   13823 addons.go:69] Setting cloud-spanner=true in profile "addons-959832"
	I0906 18:30:17.841931   13823 addons.go:234] Setting addon cloud-spanner=true in "addons-959832"
	I0906 18:30:17.841963   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.841970   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.841989   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.841816   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.842303   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.842321   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.842543   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.842595   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.839696   13823 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-959832"
	I0906 18:30:17.842884   13823 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-959832"
	I0906 18:30:17.839699   13823 addons.go:69] Setting volumesnapshots=true in profile "addons-959832"
	I0906 18:30:17.839713   13823 addons.go:69] Setting default-storageclass=true in profile "addons-959832"
	I0906 18:30:17.839762   13823 addons.go:234] Setting addon ingress=true in "addons-959832"
	I0906 18:30:17.842835   13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:17.839705   13823 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-959832"
	I0906 18:30:17.843210   13823 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-959832"
	I0906 18:30:17.843351   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.843531   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.843563   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.843835   13823 addons.go:234] Setting addon volumesnapshots=true in "addons-959832"
	I0906 18:30:17.843857   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.844006   13823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-959832"
	I0906 18:30:17.844352   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.844369   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.853075   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.861521   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
	I0906 18:30:17.862212   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.862927   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.862953   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.863254   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44761
	I0906 18:30:17.863342   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.863358   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0906 18:30:17.864034   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.864195   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.864234   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.864508   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.864529   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.864924   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.868974   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.869351   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.869398   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.869553   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.869575   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.879527   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39157
	I0906 18:30:17.879542   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0906 18:30:17.879654   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0906 18:30:17.879684   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0906 18:30:17.879760   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.881648   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.885011   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.885160   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I0906 18:30:17.885420   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.885459   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.885971   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.886011   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.886343   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.886375   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.886602   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.886665   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.886686   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.886716   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.886809   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
	I0906 18:30:17.886904   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43073
	I0906 18:30:17.887101   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887199   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887215   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887238   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887599   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.888208   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.888371   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888383   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888541   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888561   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888566   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.888701   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888711   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888743   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888754   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888780   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.889687   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.889730   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.889761   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.889889   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.889901   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.889943   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.889978   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.890062   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.890069   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.890553   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.890607   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.891323   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.891899   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.891930   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.892658   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.892934   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.893002   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.893143   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.893184   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.893806   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.893854   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.894913   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.894960   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.895352   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.895805   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.895847   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.897573   13823 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0906 18:30:17.899434   13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0906 18:30:17.899459   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0906 18:30:17.899481   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.903071   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.903469   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.903516   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.903739   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.903926   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.904048   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.904161   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.911366   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46819
	I0906 18:30:17.912019   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.912706   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.912741   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.913185   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.913911   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.913970   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.916304   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0906 18:30:17.916921   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.917609   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.917631   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.918094   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.918809   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.918849   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.920068   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34889
	I0906 18:30:17.920527   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.921055   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.921080   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.921442   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.921621   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.923561   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.924047   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I0906 18:30:17.924598   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.925400   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.925427   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.925816   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.925833   13823 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0906 18:30:17.926025   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.927332   13823 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 18:30:17.927362   13823 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 18:30:17.927413   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.928541   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.931169   13823 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0906 18:30:17.932027   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.932560   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.932588   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.932970   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0906 18:30:17.933032   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 18:30:17.933049   13823 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 18:30:17.933073   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.933158   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.933325   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.933426   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.933566   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.934213   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.934915   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.934933   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.935404   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.935557   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0906 18:30:17.935722   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.936009   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.936810   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42513
	I0906 18:30:17.937524   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.938126   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.938143   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.938211   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.938388   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.938402   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.938499   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.938891   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.938931   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.938946   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.938969   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.939155   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.939625   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.939703   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.939744   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.939784   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.939923   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.940763   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.941678   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.943308   13823 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0906 18:30:17.943311   13823 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0906 18:30:17.944079   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41849
	I0906 18:30:17.944771   13823 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:17.944801   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 18:30:17.944819   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.944775   13823 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:17.944907   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 18:30:17.944920   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.948201   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.948657   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.948689   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.948842   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.949234   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.949990   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.950029   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.950282   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.950943   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42051
	I0906 18:30:17.950969   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.950989   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.951044   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
	I0906 18:30:17.951238   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.951466   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.951515   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.951465   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.952056   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.952066   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.952073   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.952082   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.952138   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.952155   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.952344   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.952631   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.952687   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.952826   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.952846   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.953106   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.953314   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.953375   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0906 18:30:17.953914   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.953936   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.954109   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.954862   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0906 18:30:17.955016   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.955377   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.955393   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.955452   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.955793   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.955962   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.955973   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0906 18:30:17.956660   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.956816   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.956830   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.957324   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.957345   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.957414   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.957813   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.957859   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.958442   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.958480   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.959016   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.960122   13823 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-959832"
	I0906 18:30:17.960157   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.960504   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.960508   13823 addons.go:234] Setting addon default-storageclass=true in "addons-959832"
	I0906 18:30:17.960533   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.960553   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.960773   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:17.960927   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.960957   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.961028   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0906 18:30:17.963299   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0906 18:30:17.963616   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.964149   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.964171   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.964676   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.964848   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.965817   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:17.966420   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.967088   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0906 18:30:17.967322   13823 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:17.967345   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0906 18:30:17.967363   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.967560   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.968670   13823 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 18:30:17.969763   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.969781   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.970095   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 18:30:17.970112   13823 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 18:30:17.970131   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.970337   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0906 18:30:17.970743   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.971382   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.971385   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.971412   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.972059   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.972078   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.972319   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.972519   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.972712   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.972912   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.973203   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.974390   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.974410   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.975147   13823 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0906 18:30:17.975803   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.976343   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.976370   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.976539   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.976705   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.976816   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.976940   13823 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:17.976955   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 18:30:17.976970   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.977663   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.978180   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.978553   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.980971   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.981520   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.981539   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.981727   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.981897   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.982079   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.982239   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.983455   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
	I0906 18:30:17.983619   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0906 18:30:17.984075   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.984656   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.984672   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.984763   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0906 18:30:17.984898   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.985019   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.985969   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.985992   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.986044   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.986161   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.986175   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.986855   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.986875   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.987256   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.987509   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.988050   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.988397   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 18:30:17.988950   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I0906 18:30:17.989105   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.989288   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.989355   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.989528   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.989938   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.989956   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.990021   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:17.990028   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:17.990027   13823 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 18:30:17.990240   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:17.990252   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:17.990260   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:17.990268   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:17.990348   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.990523   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:17.990554   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:17.990563   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	W0906 18:30:17.990634   13823 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0906 18:30:17.990673   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.990882   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.991485   13823 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:17.991505   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 18:30:17.991523   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.992446   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 18:30:17.992494   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 18:30:17.992990   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
	I0906 18:30:17.993671   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.994204   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 18:30:17.994221   13823 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 18:30:17.994276   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.994304   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.994314   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.994319   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.994675   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.994705   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.995095   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.995127   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.995287   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.995320   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 18:30:17.995468   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.995609   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.995687   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.995715   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.995789   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.996063   13823 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0906 18:30:17.997430   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.997701   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 18:30:17.997900   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.997927   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.998085   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.998251   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.999429   13823 out.go:177]   - Using image docker.io/registry:2.8.3
	I0906 18:30:18.000423   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33437
	I0906 18:30:18.000443   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.000610   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 18:30:18.000700   13823 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 18:30:18.000713   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 18:30:18.000733   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.000992   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.001111   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:18.001653   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:18.001671   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:18.002038   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:18.002683   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:18.002727   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:18.003368   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 18:30:18.003618   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.003952   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.003970   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.004139   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.004273   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.004359   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.004434   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.005728   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 18:30:18.006862   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 18:30:18.007852   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 18:30:18.007870   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 18:30:18.007888   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.010752   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.011133   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.011162   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.011278   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.011435   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.011556   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.011677   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.019869   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
	I0906 18:30:18.025324   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:18.025853   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:18.025867   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	W0906 18:30:18.026199   13823 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37452->192.168.39.98:22: read: connection reset by peer
	I0906 18:30:18.026228   13823 retry.go:31] will retry after 165.921545ms: ssh: handshake failed: read tcp 192.168.39.1:37452->192.168.39.98:22: read: connection reset by peer
	I0906 18:30:18.026287   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:18.026483   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:18.028221   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:18.028440   13823 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:18.028451   13823 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 18:30:18.028463   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.030594   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.030951   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.030970   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.031122   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.031278   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.031416   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.031526   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.046424   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I0906 18:30:18.046881   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:18.047847   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:18.047876   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:18.048219   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:18.048439   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:18.050153   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:18.052332   13823 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 18:30:18.054123   13823 out.go:177]   - Using image docker.io/busybox:stable
	I0906 18:30:18.055683   13823 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:18.055715   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 18:30:18.055735   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.058890   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.059267   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.059308   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.059467   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.059660   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.059835   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.059965   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.325758   13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0906 18:30:18.325780   13823 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0906 18:30:18.462745   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:18.498367   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:18.542161   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 18:30:18.542189   13823 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 18:30:18.544357   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 18:30:18.544383   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 18:30:18.562318   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:18.591769   13823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:18.592321   13823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 18:30:18.615892   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:18.619170   13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 18:30:18.619198   13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 18:30:18.623393   13823 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 18:30:18.623412   13823 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 18:30:18.632558   13823 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 18:30:18.632587   13823 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 18:30:18.642554   13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:18.642577   13823 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0906 18:30:18.646434   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:18.712949   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:18.744354   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 18:30:18.744376   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 18:30:18.745893   13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 18:30:18.745909   13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 18:30:18.758057   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:18.794329   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 18:30:18.794351   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 18:30:18.810523   13823 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:18.810541   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 18:30:18.819725   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:18.820412   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 18:30:18.820430   13823 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 18:30:18.870635   13823 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 18:30:18.870657   13823 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 18:30:18.955167   13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 18:30:18.955193   13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 18:30:19.024347   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 18:30:19.024371   13823 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 18:30:19.036090   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 18:30:19.036117   13823 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 18:30:19.061575   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 18:30:19.061599   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 18:30:19.063347   13823 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 18:30:19.063362   13823 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 18:30:19.071318   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:19.185778   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 18:30:19.185801   13823 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 18:30:19.198921   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:19.198940   13823 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 18:30:19.225401   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:19.225422   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 18:30:19.250965   13823 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 18:30:19.250991   13823 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 18:30:19.295032   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 18:30:19.295064   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 18:30:19.560881   13823 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:19.560903   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 18:30:19.605732   13823 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 18:30:19.605761   13823 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 18:30:19.605857   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:19.639600   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 18:30:19.639626   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 18:30:19.651766   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:19.815029   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:19.831850   13823 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 18:30:19.831883   13823 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 18:30:19.953978   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 18:30:19.953997   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 18:30:20.091151   13823 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:20.091171   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0906 18:30:20.208365   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 18:30:20.208395   13823 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 18:30:20.322907   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:20.592180   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 18:30:20.592203   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 18:30:20.866215   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 18:30:20.866237   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 18:30:21.296320   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:21.296345   13823 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 18:30:21.533570   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:23.237459   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.774672195s)
	I0906 18:30:23.237524   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.237547   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.237911   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.237986   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.238006   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.238024   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.238036   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.238294   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.238313   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.751842   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.253438201s)
	I0906 18:30:23.751900   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.751914   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.751912   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.18956267s)
	I0906 18:30:23.751952   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.751967   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752014   13823 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.160216467s)
	I0906 18:30:23.752042   13823 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.159701916s)
	I0906 18:30:23.752057   13823 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
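The ConfigMap rewrite that just completed (the long sed pipeline started at 18:30:18.592) adds two things to the CoreDNS Corefile: a hosts stanza immediately before the existing "forward . /etc/resolv.conf" directive, and a "log" directive immediately before "errors". Reconstructing from that command alone (the IP is the one reported in the log), the affected part of the Corefile should end up roughly like this — shown only as an illustration of the edit, not copied from the cluster:

	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

This is what the "host record injected into CoreDNS's ConfigMap" line above refers to: pods resolving host.minikube.internal get the host-side address 192.168.39.1.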
	I0906 18:30:23.752091   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.136171256s)
	I0906 18:30:23.752131   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752144   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752372   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752387   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.752396   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752402   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752419   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752432   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.752442   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752445   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752450   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752518   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752555   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752587   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.752603   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752619   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752674   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752715   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752737   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752746   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.753079   13823 node_ready.go:35] waiting up to 6m0s for node "addons-959832" to be "Ready" ...
	I0906 18:30:23.753223   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.753238   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.753335   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.753364   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.753380   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.817790   13823 node_ready.go:49] node "addons-959832" has status "Ready":"True"
	I0906 18:30:23.817814   13823 node_ready.go:38] duration metric: took 64.714897ms for node "addons-959832" to be "Ready" ...
	I0906 18:30:23.817823   13823 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:23.864694   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.864718   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.864768   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.864803   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.865089   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.865109   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.865155   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.865189   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.865203   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	W0906 18:30:23.865293   13823 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0906 18:30:23.895688   13823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:24.386851   13823 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-959832" context rescaled to 1 replicas
	I0906 18:30:24.986957   13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 18:30:24.987010   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:24.990148   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:24.990559   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:24.990592   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:24.990724   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:24.990958   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:24.991131   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:24.991298   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:25.501366   13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 18:30:25.593869   13823 addons.go:234] Setting addon gcp-auth=true in "addons-959832"
	I0906 18:30:25.593929   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:25.594221   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:25.594261   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:25.609081   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0906 18:30:25.609512   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:25.609995   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:25.610010   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:25.610361   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:25.610997   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:25.611034   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:25.625831   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0906 18:30:25.626278   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:25.626760   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:25.626788   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:25.627170   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:25.627386   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:25.629014   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:25.629236   13823 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 18:30:25.629259   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:25.631653   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:25.632049   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:25.632077   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:25.632216   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:25.632399   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:25.632555   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:25.632700   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:25.941079   13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:27.481753   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.835292795s)
	I0906 18:30:27.481764   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.768781047s)
	I0906 18:30:27.481804   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481809   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.723718351s)
	I0906 18:30:27.481827   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481815   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481841   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481846   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481854   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481864   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.662110283s)
	I0906 18:30:27.481888   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481903   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481917   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.410575966s)
	I0906 18:30:27.481932   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481941   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481953   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.876072516s)
	I0906 18:30:27.481973   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481985   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482084   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.830290669s)
	I0906 18:30:27.482101   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482111   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482256   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.667196336s)
	I0906 18:30:27.482281   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	W0906 18:30:27.482296   13823 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 18:30:27.482317   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482323   13823 retry.go:31] will retry after 254.362145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
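The failure above is an ordering race rather than a broken manifest: the same kubectl apply creates the snapshot.storage.k8s.io CRDs and, in the same invocation, a VolumeSnapshotClass object, and the API server cannot map that kind until the freshly created CRDs are established — hence "ensure CRDs are installed first". minikube handles this by retrying (the re-run with --force at 18:30:27.737 below repeats the same manifests once the CRDs have had time to register). Outside of a retry loop, the usual way to avoid the race is to apply the CRDs first and wait for them explicitly before applying objects of the new kinds; a minimal sketch of that pattern (file names abbreviated for illustration, not the exact paths used by the addon):

	# apply the CRDs on their own first
	kubectl apply -f snapshot.storage.k8s.io_crds.yaml
	# block until the API server has registered them
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	# only then apply objects that depend on those kinds
	kubectl apply -f csi-hostpath-snapshotclass.yaml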
	I0906 18:30:27.482304   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482348   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482355   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482362   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482365   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482369   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482372   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482374   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482381   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482386   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482391   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482395   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482402   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482411   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482419   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482426   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482399   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.159419479s)
	I0906 18:30:27.482444   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482451   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482456   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482461   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482466   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482475   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482891   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482928   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482936   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482392   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482433   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.484341   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.484358   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.484374   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.484397   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.484405   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.484413   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.484420   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.484462   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.484469   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.484477   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.484484   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.485863   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485876   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485887   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485896   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485904   13823 addons.go:475] Verifying addon metrics-server=true in "addons-959832"
	I0906 18:30:27.485927   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.485930   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485938   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485943   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485946   13823 addons.go:475] Verifying addon ingress=true in "addons-959832"
	I0906 18:30:27.485950   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485997   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486046   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486077   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.486084   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485864   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486513   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486554   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.486562   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.487477   13823 out.go:177] * Verifying ingress addon...
	I0906 18:30:27.487573   13823 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-959832 service yakd-dashboard -n yakd-dashboard
	
	I0906 18:30:27.486024   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.487691   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.487717   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.487728   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.487937   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.487952   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.487960   13823 addons.go:475] Verifying addon registry=true in "addons-959832"
	I0906 18:30:27.487962   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.489109   13823 out.go:177] * Verifying registry addon...
	I0906 18:30:27.490025   13823 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 18:30:27.490703   13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 18:30:27.494994   13823 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 18:30:27.495014   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:27.495422   13823 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 18:30:27.495442   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:27.737115   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:27.995783   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:27.996316   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:28.405776   13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:28.525889   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:28.526140   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.000232   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:29.000400   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.288925   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.755298783s)
	I0906 18:30:29.288949   13823 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.659689548s)
	I0906 18:30:29.288969   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.288980   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.289345   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.289363   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.289373   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.289381   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.289348   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:29.289643   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.289659   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.289670   13823 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-959832"
	I0906 18:30:29.290527   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:29.291464   13823 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 18:30:29.293133   13823 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0906 18:30:29.293804   13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 18:30:29.294483   13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 18:30:29.294501   13823 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 18:30:29.307557   13823 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 18:30:29.307575   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:29.501347   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:29.502636   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.549399   13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 18:30:29.549424   13823 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 18:30:29.631326   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.894156301s)
	I0906 18:30:29.631395   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.631409   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.631783   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.631805   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.631809   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:29.631815   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.631831   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.632053   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.632067   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.711353   13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:29.711373   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 18:30:29.758533   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:29.798367   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:29.994829   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.995464   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:30.298814   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:30.494755   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:30.495217   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:30.800377   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:30.927844   13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:31.011246   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:31.011996   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:31.259074   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.500495277s)
	I0906 18:30:31.259136   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:31.259150   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:31.259463   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:31.259567   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:31.259547   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:31.259579   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:31.259614   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:31.259913   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:31.259930   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:31.259955   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:31.261909   13823 addons.go:475] Verifying addon gcp-auth=true in "addons-959832"
	I0906 18:30:31.263787   13823 out.go:177] * Verifying gcp-auth addon...
	I0906 18:30:31.265893   13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 18:30:31.298469   13823 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:30:31.298489   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:31.300480   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:31.497017   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:31.497257   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:31.769388   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:31.798048   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:31.995495   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:31.995656   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:32.269836   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:32.298842   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:32.495206   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:32.496478   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:32.769455   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:32.798535   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:32.905084   13823 pod_ready.go:98] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:32 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.98 HostIPs:[{IP:192.168.39.98}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-06 18:30:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:23 +0000 UTC,FinishedAt:2024-09-06 18:30:30 +0000 UTC,ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2 Started:0xc0020651d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000b9f530} {Name:kube-api-access-fjvjc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000b9f540}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0906 18:30:32.905113   13823 pod_ready.go:82] duration metric: took 9.009398679s for pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace to be "Ready" ...
	E0906 18:30:32.905127   13823 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:32 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.98 HostIPs:[{IP:192.168.39.98}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-06 18:30:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:23 +0000 UTC,FinishedAt:2024-09-06 18:30:30 +0000 UTC,ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2 Started:0xc0020651d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000b9f530} {Name:kube-api-access-fjvjc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000b9f540}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0906 18:30:32.905141   13823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.911075   13823 pod_ready.go:93] pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.911105   13823 pod_ready.go:82] duration metric: took 5.954486ms for pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.911119   13823 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.928213   13823 pod_ready.go:93] pod "etcd-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.928234   13823 pod_ready.go:82] duration metric: took 17.107089ms for pod "etcd-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.928244   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.942443   13823 pod_ready.go:93] pod "kube-apiserver-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.942474   13823 pod_ready.go:82] duration metric: took 14.222157ms for pod "kube-apiserver-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.942489   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.948544   13823 pod_ready.go:93] pod "kube-controller-manager-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.948568   13823 pod_ready.go:82] duration metric: took 6.069443ms for pod "kube-controller-manager-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.948594   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-df5wg" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.995554   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:32.996027   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:33.270077   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:33.300133   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:33.300322   13823 pod_ready.go:93] pod "kube-proxy-df5wg" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:33.300343   13823 pod_ready.go:82] duration metric: took 351.740369ms for pod "kube-proxy-df5wg" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.300356   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.494781   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:33.495847   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:33.701424   13823 pod_ready.go:93] pod "kube-scheduler-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:33.701467   13823 pod_ready.go:82] duration metric: took 401.098684ms for pod "kube-scheduler-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.701495   13823 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.769360   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:33.798021   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:33.995683   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:33.997103   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.270015   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:34.299221   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:34.495406   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:34.496126   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.770094   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:34.799237   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:34.996508   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.997585   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:35.270568   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:35.299394   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:35.495141   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:35.495320   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:35.707531   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:35.770986   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:35.800293   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:35.996725   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:35.997639   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:36.270981   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:36.303214   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:36.494976   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:36.496783   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:36.771081   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:36.799874   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:36.995676   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:36.996010   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:37.270120   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:37.299046   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:37.494705   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:37.496067   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:37.707603   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:37.769678   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:37.798583   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:37.995037   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:37.995885   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:38.269217   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:38.298643   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:38.495448   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:38.495856   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:38.769730   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:38.799711   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.083640   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.083787   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:39.496519   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:39.496908   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:39.497701   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.499783   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.769883   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:39.798544   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.994338   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.995398   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:40.209006   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:40.272568   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:40.301397   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:40.498136   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:40.498526   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:40.770814   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:40.798522   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:40.994052   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:40.995394   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.270657   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:41.298770   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:41.498318   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.498596   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:41.770854   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:41.799666   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:41.995027   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.995612   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.270017   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:42.299094   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:42.592984   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.595535   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:42.721960   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:42.772381   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:42.799751   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:42.995172   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.995508   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:43.272873   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:43.298467   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.494939   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:43.495402   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.769785   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:43.798713   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.996443   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.996744   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.269175   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:44.308002   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.494478   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:44.494986   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.770210   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:44.797768   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.995782   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.997472   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.207350   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:45.269487   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:45.298388   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.494409   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.494479   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:45.769970   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:45.798375   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.995583   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:45.995736   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.269632   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:46.299154   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.495331   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.495578   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:46.769857   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:46.799172   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.995967   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.996352   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.207412   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:47.270222   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:47.300058   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.501228   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:47.501496   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.769887   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:47.798711   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.994453   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.994618   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.270499   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:48.298587   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.494874   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.494941   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:48.771487   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:48.799341   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.995078   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:48.995997   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.270055   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:49.297759   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.493704   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.496397   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.707766   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:49.769942   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:49.799020   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.994521   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.995871   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.269405   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:50.298442   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.495620   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:50.496486   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.876382   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:50.877156   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.996700   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.996938   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.269377   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:51.298953   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.495015   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.495481   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.708764   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:51.770620   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:51.798067   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.994702   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.995528   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.269440   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:52.298688   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.496129   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.497284   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:52.769844   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:52.799404   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.995549   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.995828   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.272511   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:53.299182   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.495690   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.498212   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.769884   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:53.799759   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.994840   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.994970   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.208168   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:54.270994   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:54.301366   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.494638   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:54.495314   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.769283   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:54.797866   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.272696   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.272743   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:55.272998   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.298147   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.495547   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.495711   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.770496   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:55.802302   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.995386   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.995623   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:56.268801   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:56.298461   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.494963   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.495882   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.291534   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.291868   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:57.292073   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:57.292099   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.293348   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.309051   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:57.309858   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.312884   13823 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:57.312900   13823 pod_ready.go:82] duration metric: took 23.611395425s for pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:57.312922   13823 pod_ready.go:39] duration metric: took 33.495084445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:57.312943   13823 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:30:57.312998   13823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:57.342569   13823 api_server.go:72] duration metric: took 39.503199537s to wait for apiserver process to appear ...
	I0906 18:30:57.342597   13823 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:30:57.342618   13823 api_server.go:253] Checking apiserver healthz at https://192.168.39.98:8443/healthz ...
	I0906 18:30:57.347032   13823 api_server.go:279] https://192.168.39.98:8443/healthz returned 200:
	ok
	I0906 18:30:57.348263   13823 api_server.go:141] control plane version: v1.31.0
	I0906 18:30:57.348287   13823 api_server.go:131] duration metric: took 5.682402ms to wait for apiserver health ...
	I0906 18:30:57.348297   13823 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:30:57.359723   13823 system_pods.go:59] 18 kube-system pods found
	I0906 18:30:57.359757   13823 system_pods.go:61] "coredns-6f6b679f8f-d5d26" [8f56a285-a4a2-42b2-b904-86d4b92e1593] Running
	I0906 18:30:57.359769   13823 system_pods.go:61] "csi-hostpath-attacher-0" [077a752a-2398-4e94-b907-d0888261774c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:30:57.359778   13823 system_pods.go:61] "csi-hostpath-resizer-0" [4d49487b-d00b-4ee7-8007-fc440aad009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:30:57.359790   13823 system_pods.go:61] "csi-hostpathplugin-j7df9" [146029b8-76c4-479b-8217-00a90921e5d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:30:57.359800   13823 system_pods.go:61] "etcd-addons-959832" [2517086a-0030-456f-a07a-8973652d205c] Running
	I0906 18:30:57.359806   13823 system_pods.go:61] "kube-apiserver-addons-959832" [c93b4ce0-62b0-4e1f-9a98-76b6e7ad4fbc] Running
	I0906 18:30:57.359815   13823 system_pods.go:61] "kube-controller-manager-addons-959832" [3dc3e2e0-cdf7-4d83-8d8e-5cc86d87c45b] Running
	I0906 18:30:57.359820   13823 system_pods.go:61] "kube-ingress-dns-minikube" [1673a19c-a4a9-4d9d-bda1-e073fb44b3d8] Running
	I0906 18:30:57.359826   13823 system_pods.go:61] "kube-proxy-df5wg" [f92f8a67-fa25-410a-b7f6-928c602e53e5] Running
	I0906 18:30:57.359829   13823 system_pods.go:61] "kube-scheduler-addons-959832" [0a2458fe-333d-4ca7-b2ab-c58159f3a491] Running
	I0906 18:30:57.359834   13823 system_pods.go:61] "metrics-server-84c5f94fbc-flnx5" [01d423d8-1a69-47b2-be5a-57dc6f3f7268] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:30:57.359840   13823 system_pods.go:61] "nvidia-device-plugin-daemonset-nsxpz" [c35f7718-6879-4edb-9a8b-5b4a82ad2a7c] Running
	I0906 18:30:57.359846   13823 system_pods.go:61] "registry-6fb4cdfc84-4hp57" [995000c4-356d-4aee-b8b4-6c719240ca26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:30:57.359852   13823 system_pods.go:61] "registry-proxy-5jxb2" [8ea39930-6a75-4ad5-a074-233a2b95f98f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:30:57.359858   13823 system_pods.go:61] "snapshot-controller-56fcc65765-db2j5" [afcb8d14-41d7-444b-b16d-496ca520ee39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.359867   13823 system_pods.go:61] "snapshot-controller-56fcc65765-jjdrv" [d3df181f-bfa3-4ef4-9767-ecc84c335cc4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.359871   13823 system_pods.go:61] "storage-provisioner" [a837ebf7-7140-4baa-8b93-ea556996b204] Running
	I0906 18:30:57.359877   13823 system_pods.go:61] "tiller-deploy-b48cc5f79-d2ggh" [5951b042-9892-4eb8-b567-933475c4a163] Running
	I0906 18:30:57.359885   13823 system_pods.go:74] duration metric: took 11.581782ms to wait for pod list to return data ...
	I0906 18:30:57.359894   13823 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:30:57.364154   13823 default_sa.go:45] found service account: "default"
	I0906 18:30:57.364173   13823 default_sa.go:55] duration metric: took 4.273217ms for default service account to be created ...
	I0906 18:30:57.364181   13823 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:30:57.373118   13823 system_pods.go:86] 18 kube-system pods found
	I0906 18:30:57.373150   13823 system_pods.go:89] "coredns-6f6b679f8f-d5d26" [8f56a285-a4a2-42b2-b904-86d4b92e1593] Running
	I0906 18:30:57.373165   13823 system_pods.go:89] "csi-hostpath-attacher-0" [077a752a-2398-4e94-b907-d0888261774c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:30:57.373175   13823 system_pods.go:89] "csi-hostpath-resizer-0" [4d49487b-d00b-4ee7-8007-fc440aad009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:30:57.373194   13823 system_pods.go:89] "csi-hostpathplugin-j7df9" [146029b8-76c4-479b-8217-00a90921e5d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:30:57.373202   13823 system_pods.go:89] "etcd-addons-959832" [2517086a-0030-456f-a07a-8973652d205c] Running
	I0906 18:30:57.373217   13823 system_pods.go:89] "kube-apiserver-addons-959832" [c93b4ce0-62b0-4e1f-9a98-76b6e7ad4fbc] Running
	I0906 18:30:57.373223   13823 system_pods.go:89] "kube-controller-manager-addons-959832" [3dc3e2e0-cdf7-4d83-8d8e-5cc86d87c45b] Running
	I0906 18:30:57.373227   13823 system_pods.go:89] "kube-ingress-dns-minikube" [1673a19c-a4a9-4d9d-bda1-e073fb44b3d8] Running
	I0906 18:30:57.373230   13823 system_pods.go:89] "kube-proxy-df5wg" [f92f8a67-fa25-410a-b7f6-928c602e53e5] Running
	I0906 18:30:57.373237   13823 system_pods.go:89] "kube-scheduler-addons-959832" [0a2458fe-333d-4ca7-b2ab-c58159f3a491] Running
	I0906 18:30:57.373242   13823 system_pods.go:89] "metrics-server-84c5f94fbc-flnx5" [01d423d8-1a69-47b2-be5a-57dc6f3f7268] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:30:57.373246   13823 system_pods.go:89] "nvidia-device-plugin-daemonset-nsxpz" [c35f7718-6879-4edb-9a8b-5b4a82ad2a7c] Running
	I0906 18:30:57.373252   13823 system_pods.go:89] "registry-6fb4cdfc84-4hp57" [995000c4-356d-4aee-b8b4-6c719240ca26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:30:57.373257   13823 system_pods.go:89] "registry-proxy-5jxb2" [8ea39930-6a75-4ad5-a074-233a2b95f98f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:30:57.373264   13823 system_pods.go:89] "snapshot-controller-56fcc65765-db2j5" [afcb8d14-41d7-444b-b16d-496ca520ee39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.373273   13823 system_pods.go:89] "snapshot-controller-56fcc65765-jjdrv" [d3df181f-bfa3-4ef4-9767-ecc84c335cc4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.373280   13823 system_pods.go:89] "storage-provisioner" [a837ebf7-7140-4baa-8b93-ea556996b204] Running
	I0906 18:30:57.373287   13823 system_pods.go:89] "tiller-deploy-b48cc5f79-d2ggh" [5951b042-9892-4eb8-b567-933475c4a163] Running
	I0906 18:30:57.373299   13823 system_pods.go:126] duration metric: took 9.109597ms to wait for k8s-apps to be running ...
	I0906 18:30:57.373309   13823 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:30:57.373355   13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:30:57.425478   13823 system_svc.go:56] duration metric: took 52.162346ms WaitForService to wait for kubelet
	I0906 18:30:57.425503   13823 kubeadm.go:582] duration metric: took 39.586136805s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:30:57.425533   13823 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:30:57.428818   13823 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:30:57.428842   13823 node_conditions.go:123] node cpu capacity is 2
	I0906 18:30:57.428863   13823 node_conditions.go:105] duration metric: took 3.314164ms to run NodePressure ...
	I0906 18:30:57.428878   13823 start.go:241] waiting for startup goroutines ...
	I0906 18:30:57.495273   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.495869   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.769593   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:57.798564   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.995122   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.995468   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.270153   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:58.299032   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.495028   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.495638   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.770199   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:58.797952   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.994635   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.995409   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.269612   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:59.298532   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.494666   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.495202   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:59.769637   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:59.799716   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.995110   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.997059   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.269925   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:00.299168   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.495168   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.495452   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.769831   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:00.798879   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.994356   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.995338   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:01.270323   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:01.298809   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:01.497749   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:01.509994   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.196171   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:02.197232   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.197446   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.198219   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.269772   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:02.299913   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.495441   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.496083   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.770038   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:02.800728   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.995143   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.995393   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.269175   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:03.298453   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.495672   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.495941   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.769214   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:03.798100   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.996193   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.996547   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.270229   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:04.300339   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:04.495048   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:04.495208   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.769698   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:04.798488   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.000395   13823 kapi.go:107] duration metric: took 37.509684094s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 18:31:05.000674   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:05.270104   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:05.297638   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.495343   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:05.770543   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:05.800954   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.994937   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.270489   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:06.299401   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:06.495523   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.775824   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:06.804605   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.000907   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.281094   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:07.306915   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.818623   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:07.820944   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.821122   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.994968   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:08.269992   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:08.298837   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.493945   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:08.769482   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:08.798377   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.994691   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:09.269835   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:09.299230   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:09.502957   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:09.769997   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:09.798765   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.127650   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:10.275919   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:10.300104   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.495617   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:10.769823   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:10.798656   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.995288   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:11.270073   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:11.299546   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.494131   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:11.771059   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:11.799920   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.995856   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:12.274737   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:12.299392   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.494262   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:12.769625   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:12.798619   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.995358   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:13.316812   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:13.317852   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.495815   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:13.769181   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:13.799259   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.995199   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:14.276613   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:14.379012   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.494898   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:14.770331   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:14.798773   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.995445   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:15.272540   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:15.301141   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.495285   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:15.770353   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:15.798730   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.994520   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:16.270657   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:16.300620   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.494263   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:16.770371   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:16.799256   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.994749   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:17.269747   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:17.298951   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.494719   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:17.769832   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:17.799470   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.994977   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:18.269720   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:18.310969   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.494867   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:18.769348   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:18.798225   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.994850   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:19.282829   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:19.384038   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.497045   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:19.770599   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:19.801611   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.996550   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:20.270037   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:20.311775   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.498768   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:20.769965   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:20.799204   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.997161   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:21.270035   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:21.299010   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.494660   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:21.769290   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:21.798619   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.994674   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:22.269883   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:22.300295   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:22.496723   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:22.771097   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:22.799152   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.013066   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:23.270485   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:23.299028   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.496372   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:23.770017   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:23.801362   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.996357   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:24.270445   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:24.299776   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:24.494072   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.030314   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:25.030783   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.031442   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.269910   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:25.371610   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.494715   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.770973   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:25.799735   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.994854   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:26.270976   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:26.299500   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:26.494510   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:26.770729   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:26.873976   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:26.993699   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:27.269916   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:27.299203   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:27.494353   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:27.771154   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:27.798428   13823 kapi.go:107] duration metric: took 58.504619679s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 18:31:27.996381   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:28.271088   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:28.493970   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:28.769758   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:28.994788   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:29.271720   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:29.496574   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:29.770127   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:29.994752   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:30.464639   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:30.495124   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:30.770101   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:30.995408   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:31.270144   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:31.495730   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:31.769464   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:31.996345   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:32.269861   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:32.495930   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:32.768939   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:32.996483   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:33.269235   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:33.494459   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:33.769303   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:33.994740   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:34.270162   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:34.494209   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:34.772239   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:34.995450   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:35.270037   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:35.494858   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:35.770518   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:35.994084   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:36.270405   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:36.496230   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:36.770326   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:36.994330   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:37.270147   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:37.493620   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:37.778857   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:38.113592   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:38.270475   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:38.494284   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:38.769614   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:39.006516   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:39.273731   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:39.495548   13823 kapi.go:107] duration metric: took 1m12.005524271s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 18:31:39.770852   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:40.269133   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:40.769688   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:41.270179   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:41.769459   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:42.270714   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:42.770252   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:43.270294   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:43.770209   13823 kapi.go:107] duration metric: took 1m12.504314576s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 18:31:43.771902   13823 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-959832 cluster.
	I0906 18:31:43.773493   13823 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 18:31:43.774994   13823 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 18:31:43.776439   13823 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, default-storageclass, nvidia-device-plugin, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0906 18:31:43.778228   13823 addons.go:510] duration metric: took 1m25.938813235s for enable addons: enabled=[storage-provisioner ingress-dns default-storageclass nvidia-device-plugin cloud-spanner metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0906 18:31:43.778280   13823 start.go:246] waiting for cluster config update ...
	I0906 18:31:43.778303   13823 start.go:255] writing updated cluster config ...
	I0906 18:31:43.778560   13823 ssh_runner.go:195] Run: rm -f paused
	I0906 18:31:43.828681   13823 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 18:31:43.830792   13823 out.go:177] * Done! kubectl is now configured to use "addons-959832" cluster and "default" namespace by default
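The gcp-auth messages above note that pods can opt out of credential mounting by carrying a `gcp-auth-skip-secret` label. A minimal illustrative sketch of such a pod object using the Kubernetes Go types is below; the pod name, image, and the label value "true" are assumptions for illustration only (the log states the label key, not its value), not part of the test output.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Hypothetical pod that opts out of gcp-auth credential injection by
    	// carrying the gcp-auth-skip-secret label mentioned in the log above.
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "example-no-gcp-creds", // hypothetical name
    			Namespace: "default",
    			Labels: map[string]string{
    				"gcp-auth-skip-secret": "true", // value assumed; the log only names the key
    			},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{
    				{Name: "app", Image: "busybox"}, // placeholder container
    			},
    		},
    	}
    	fmt.Println(pod.Labels)
    }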
	
	
	==> CRI-O <==
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.477681804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648059477605049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c0297f3-fe00-405e-9ba7-b7639a9f70e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.478519982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab2efd6c-1690-40d4-b3cd-25cee8b1e629 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.478623655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab2efd6c-1690-40d4-b3cd-25cee8b1e629 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.479015594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:751a19a588218de05376aea0383786ab3c8c10132343d3fa939969f20168d47a,PodSandboxId:9eff610caae62c68fd5df308e75d93b0e306aaedc003aa13c5175206cd50d82e,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648044525071065,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ca6d482b-e311-418b-b2d8-b7dd38238386,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0033d69fcbd8e6d154c6229031ce690f9d53fc4de18acfc56a9100ab87063d8f,PodSandboxId:8b1ac3c44a7956fdba07d51c1dd11cf7d5ab97999d70bc46150eeabb8f26970f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725648041683592210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754a36f2-796a-43db-86bb-d5a98787bdac,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64bc8797628e87ff6d7db9bb03163065fc3cef5deaf292daf56f7f1723e79f0c,PodSandboxId:9177865f139ac637274428a14fa3e86411a8a8eb1ae2a167bc45e453e2ab1270,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648037698509771,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 99b323c2-294b-40f3-9308-37241d2e4d94,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.po
rts: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e,PodSandboxId:8b8b62d5172cf7631d6c383bf5bb62c7aca55268e507ef69f63a5cd2e24ef15c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725647498764868534,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996
ff-5z4xh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 834d08fb-b9a8-4a67-b022-fec07c4b5fa9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f
3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee8
69b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b
066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86606ac7f428d65be26e62d10b92b19fccc1a4f6c65aad
8d580fce58b25aa967,PodSandboxId:41aeff34f5a9ca0decd72d59cef3929fc44a2fac7245c5db7552b7d585c380c4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725647455743253120,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-nsxpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35f7718-6879-4edb-9a8b-5b4a82ad2a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:9b38efef5174e5e3049f34f60a96316e51b7dfe1598d0e18c65e07207af2ee1a,PodSandboxId:94957bf19e8b18bcb9321523886280255160384204f3a5f1ea91beff0eb6021b,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725647446920084288,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-zh76q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79327e55-0b23-469f-bdc9-0611cfa8a848,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218,PodSandboxId:2131ffc93d2dbdf77608df2a3747aa930cf8f0c284b8bab57c8e919f3295247a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725647435717081953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1673a19c-a4a9-4d9d-bda1-e073fb44b3d8,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-4
2b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019
743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab2efd6c-1690-40d4-b3cd-25cee8b1e629 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.512966847Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7eb61c96-0409-4aa9-b9ac-aa3f16b7fc79 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.513059301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7eb61c96-0409-4aa9-b9ac-aa3f16b7fc79 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.513970984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3bd424b9-9df4-40ff-bdb0-d2af34377049 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.515153140Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648059515121423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3bd424b9-9df4-40ff-bdb0-d2af34377049 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.515690188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af4c0026-ba03-433a-b347-01633417c3a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.515745740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af4c0026-ba03-433a-b347-01633417c3a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.516591018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:751a19a588218de05376aea0383786ab3c8c10132343d3fa939969f20168d47a,PodSandboxId:9eff610caae62c68fd5df308e75d93b0e306aaedc003aa13c5175206cd50d82e,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648044525071065,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ca6d482b-e311-418b-b2d8-b7dd38238386,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0033d69fcbd8e6d154c6229031ce690f9d53fc4de18acfc56a9100ab87063d8f,PodSandboxId:8b1ac3c44a7956fdba07d51c1dd11cf7d5ab97999d70bc46150eeabb8f26970f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725648041683592210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754a36f2-796a-43db-86bb-d5a98787bdac,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64bc8797628e87ff6d7db9bb03163065fc3cef5deaf292daf56f7f1723e79f0c,PodSandboxId:9177865f139ac637274428a14fa3e86411a8a8eb1ae2a167bc45e453e2ab1270,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648037698509771,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 99b323c2-294b-40f3-9308-37241d2e4d94,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.po
rts: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e,PodSandboxId:8b8b62d5172cf7631d6c383bf5bb62c7aca55268e507ef69f63a5cd2e24ef15c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725647498764868534,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996
ff-5z4xh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 834d08fb-b9a8-4a67-b022-fec07c4b5fa9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f
3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee8
69b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b
066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86606ac7f428d65be26e62d10b92b19fccc1a4f6c65aad
8d580fce58b25aa967,PodSandboxId:41aeff34f5a9ca0decd72d59cef3929fc44a2fac7245c5db7552b7d585c380c4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725647455743253120,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-nsxpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35f7718-6879-4edb-9a8b-5b4a82ad2a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:9b38efef5174e5e3049f34f60a96316e51b7dfe1598d0e18c65e07207af2ee1a,PodSandboxId:94957bf19e8b18bcb9321523886280255160384204f3a5f1ea91beff0eb6021b,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725647446920084288,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-zh76q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79327e55-0b23-469f-bdc9-0611cfa8a848,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218,PodSandboxId:2131ffc93d2dbdf77608df2a3747aa930cf8f0c284b8bab57c8e919f3295247a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725647435717081953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1673a19c-a4a9-4d9d-bda1-e073fb44b3d8,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-4
2b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019
743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af4c0026-ba03-433a-b347-01633417c3a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.569539729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd75c494-b8e8-41a9-a9f0-4f4a5108f674 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.569629524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd75c494-b8e8-41a9-a9f0-4f4a5108f674 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.570645034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=685e3c2d-fb87-4633-8f49-c6cb353f8ba8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.571745517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648059571718376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=685e3c2d-fb87-4633-8f49-c6cb353f8ba8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.572415648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fd8e417-63d1-4f69-b718-145980fd9dbc name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.572561425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fd8e417-63d1-4f69-b718-145980fd9dbc name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.572983807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:751a19a588218de05376aea0383786ab3c8c10132343d3fa939969f20168d47a,PodSandboxId:9eff610caae62c68fd5df308e75d93b0e306aaedc003aa13c5175206cd50d82e,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648044525071065,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ca6d482b-e311-418b-b2d8-b7dd38238386,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0033d69fcbd8e6d154c6229031ce690f9d53fc4de18acfc56a9100ab87063d8f,PodSandboxId:8b1ac3c44a7956fdba07d51c1dd11cf7d5ab97999d70bc46150eeabb8f26970f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725648041683592210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754a36f2-796a-43db-86bb-d5a98787bdac,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64bc8797628e87ff6d7db9bb03163065fc3cef5deaf292daf56f7f1723e79f0c,PodSandboxId:9177865f139ac637274428a14fa3e86411a8a8eb1ae2a167bc45e453e2ab1270,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648037698509771,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 99b323c2-294b-40f3-9308-37241d2e4d94,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.po
rts: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e,PodSandboxId:8b8b62d5172cf7631d6c383bf5bb62c7aca55268e507ef69f63a5cd2e24ef15c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725647498764868534,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996
ff-5z4xh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 834d08fb-b9a8-4a67-b022-fec07c4b5fa9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f
3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee8
69b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b
066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86606ac7f428d65be26e62d10b92b19fccc1a4f6c65aad
8d580fce58b25aa967,PodSandboxId:41aeff34f5a9ca0decd72d59cef3929fc44a2fac7245c5db7552b7d585c380c4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725647455743253120,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-nsxpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35f7718-6879-4edb-9a8b-5b4a82ad2a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:9b38efef5174e5e3049f34f60a96316e51b7dfe1598d0e18c65e07207af2ee1a,PodSandboxId:94957bf19e8b18bcb9321523886280255160384204f3a5f1ea91beff0eb6021b,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725647446920084288,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-zh76q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79327e55-0b23-469f-bdc9-0611cfa8a848,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218,PodSandboxId:2131ffc93d2dbdf77608df2a3747aa930cf8f0c284b8bab57c8e919f3295247a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725647435717081953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1673a19c-a4a9-4d9d-bda1-e073fb44b3d8,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-4
2b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019
743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fd8e417-63d1-4f69-b718-145980fd9dbc name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.606084604Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7bb0d2a1-f8b0-4793-9ac6-a6ed250e72fc name=/runtime.v1.RuntimeService/Version
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.606170783Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7bb0d2a1-f8b0-4793-9ac6-a6ed250e72fc name=/runtime.v1.RuntimeService/Version
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.607185958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57c344d6-c86b-4ebe-993e-399dabdc4573 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.613546938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648059613514131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57c344d6-c86b-4ebe-993e-399dabdc4573 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.614546038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef8cfa00-b837-4681-b934-6c9df95c444f name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.614608199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef8cfa00-b837-4681-b934-6c9df95c444f name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:40:59 addons-959832 crio[670]: time="2024-09-06 18:40:59.615331637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:751a19a588218de05376aea0383786ab3c8c10132343d3fa939969f20168d47a,PodSandboxId:9eff610caae62c68fd5df308e75d93b0e306aaedc003aa13c5175206cd50d82e,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648044525071065,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: ca6d482b-e311-418b-b2d8-b7dd38238386,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0033d69fcbd8e6d154c6229031ce690f9d53fc4de18acfc56a9100ab87063d8f,PodSandboxId:8b1ac3c44a7956fdba07d51c1dd11cf7d5ab97999d70bc46150eeabb8f26970f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:87ff76f62d367950186bde563642e39208c0e2b4afc833b4b3b01b8fef60ae9e,State:CONTAINER_EXITED,CreatedAt:1725648041683592210,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 754a36f2-796a-43db-86bb-d5a98787bdac,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64bc8797628e87ff6d7db9bb03163065fc3cef5deaf292daf56f7f1723e79f0c,PodSandboxId:9177865f139ac637274428a14fa3e86411a8a8eb1ae2a167bc45e453e2ab1270,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1725648037698509771,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-create-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 99b323c2-294b-40f3-9308-37241d2e4d94,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.po
rts: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annota
tions:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e,PodSandboxId:8b8b62d5172cf7631d6c383bf5bb62c7aca55268e507ef69f63a5cd2e24ef15c,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1725647498764868534,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996
ff-5z4xh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 834d08fb-b9a8-4a67-b022-fec07c4b5fa9,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f
3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee8
69b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b
066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86606ac7f428d65be26e62d10b92b19fccc1a4f6c65aad
8d580fce58b25aa967,PodSandboxId:41aeff34f5a9ca0decd72d59cef3929fc44a2fac7245c5db7552b7d585c380c4,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1725647455743253120,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-nsxpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c35f7718-6879-4edb-9a8b-5b4a82ad2a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Cont
ainer{Id:9b38efef5174e5e3049f34f60a96316e51b7dfe1598d0e18c65e07207af2ee1a,PodSandboxId:94957bf19e8b18bcb9321523886280255160384204f3a5f1ea91beff0eb6021b,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1725647446920084288,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-zh76q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 79327e55-0b23-469f-bdc9-0611cfa8a848,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],i
o.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218,PodSandboxId:2131ffc93d2dbdf77608df2a3747aa930cf8f0c284b8bab57c8e919f3295247a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1725647435717081953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1673a19c-a4a9-4d9d-bda1-e073fb44b3d8,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-4
2b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019
743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[str
ing]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.c
ontainer.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kube
rnetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef8cfa00-b837-4681-b934-6c9df95c444f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	751a19a588218       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             15 seconds ago      Exited              helper-pod                 0                   9eff610caae62       helper-pod-delete-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9
	0033d69fcbd8e       docker.io/library/busybox@sha256:1f3c4ec00c804f65805bd22b358c8fbba6b0ab4e32171adba33058cf635923aa                            18 seconds ago      Exited              busybox                    0                   8b1ac3c44a795       test-local-path
	64bc8797628e8       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                            21 seconds ago      Exited              helper-pod                 0                   9177865f139ac       helper-pod-create-pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9
	47ff4cd5a2010       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              44 seconds ago      Running             nginx                      0                   e9d551110687a       nginx
	bff22acf8afe6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                   0                   6009e3b23d6b9       gcp-auth-89d5ffd79-wbp4z
	2f6f132825107       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                 0                   8b8b62d5172cf       ingress-nginx-controller-bc57996ff-5z4xh
	b9dae7d0e5426       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             9 minutes ago       Exited              patch                      2                   09518ced68465       ingress-nginx-admission-patch-h6cwj
	f1aec73f0b154       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                     0                   d305340c16851       ingress-nginx-admission-create-gbh5k
	dbdca73cd5f41       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        9 minutes ago       Running             metrics-server             0                   ebd17a7bfd07d       metrics-server-84c5f94fbc-flnx5
	d8e6b5740dfd9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             10 minutes ago      Running             local-path-provisioner     0                   bb57b9b0a87b0       local-path-provisioner-86d989889c-wmllc
	86606ac7f428d       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   41aeff34f5a9c       nvidia-device-plugin-daemonset-nsxpz
	9b38efef5174e       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               10 minutes ago      Running             cloud-spanner-emulator     0                   94957bf19e8b1       cloud-spanner-emulator-769b77f747-zh76q
	3be35f5c5847b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns       0                   2131ffc93d2db       kube-ingress-dns-minikube
	095caffa96df4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner        0                   fb03fe115a315       storage-provisioner
	daf771eda93ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             10 minutes ago      Running             coredns                    0                   cf16f9b0ce0a6       coredns-6f6b679f8f-d5d26
	f62f176bebb98       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             10 minutes ago      Running             kube-proxy                 0                   a16d4e27651e7       kube-proxy-df5wg
	0976f654c6450       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             10 minutes ago      Running             kube-controller-manager    0                   08d02ee1f1b83       kube-controller-manager-addons-959832
	0062bd6dff511       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             10 minutes ago      Running             kube-scheduler             0                   3810e200d7f2c       kube-scheduler-addons-959832
	14011f30e4b49       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago      Running             etcd                       0                   1340e66e90fd2       etcd-addons-959832
	f03b3137e10ab       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             10 minutes ago      Running             kube-apiserver             0                   6a4a01ed6ac27       kube-apiserver-addons-959832
	
	
	==> coredns [daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025] <==
	[INFO] 10.244.0.8:53109 - 30493 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000031299s
	[INFO] 10.244.0.8:51164 - 21323 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000073777s
	[INFO] 10.244.0.8:51164 - 9807 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00003634s
	[INFO] 10.244.0.8:33912 - 61080 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030797s
	[INFO] 10.244.0.8:33912 - 53146 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000256s
	[INFO] 10.244.0.8:51671 - 8759 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027086s
	[INFO] 10.244.0.8:51671 - 2357 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069078s
	[INFO] 10.244.0.8:58937 - 47939 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000029815s
	[INFO] 10.244.0.8:58937 - 55677 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000025038s
	[INFO] 10.244.0.8:59574 - 33097 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000055434s
	[INFO] 10.244.0.8:59574 - 49222 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000032883s
	[INFO] 10.244.0.8:34345 - 33033 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000025905s
	[INFO] 10.244.0.8:34345 - 61711 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000025782s
	[INFO] 10.244.0.8:40854 - 19935 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000024436s
	[INFO] 10.244.0.8:40854 - 16861 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000022079s
	[INFO] 10.244.0.8:54975 - 41823 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000033452s
	[INFO] 10.244.0.8:54975 - 6745 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041358s
	[INFO] 10.244.0.22:39608 - 5840 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000623407s
	[INFO] 10.244.0.22:47451 - 10373 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000773196s
	[INFO] 10.244.0.22:47147 - 43920 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096203s
	[INFO] 10.244.0.22:37201 - 19027 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000052062s
	[INFO] 10.244.0.22:51583 - 38377 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070102s
	[INFO] 10.244.0.22:37854 - 16491 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000049501s
	[INFO] 10.244.0.22:55914 - 7247 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000846443s
	[INFO] 10.244.0.22:51764 - 46657 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001169257s
	
	
	==> describe nodes <==
	Name:               addons-959832
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-959832
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=addons-959832
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_30_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-959832
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:30:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-959832
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:40:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:40:46 +0000   Fri, 06 Sep 2024 18:30:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:40:46 +0000   Fri, 06 Sep 2024 18:30:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:40:46 +0000   Fri, 06 Sep 2024 18:30:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:40:46 +0000   Fri, 06 Sep 2024 18:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    addons-959832
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 789fcfcd81af4b61a593ac3d592db28c
	  System UUID:                789fcfcd-81af-4b61-a593-ac3d592db28c
	  Boot ID:                    ca224247-03d2-489f-a0b8-0a2fbb84d9da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-zh76q     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  gcp-auth                    gcp-auth-89d5ffd79-wbp4z                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-5z4xh    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-d5d26                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-959832                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-959832                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-959832       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-df5wg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-959832                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-flnx5             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 nvidia-device-plugin-daemonset-nsxpz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-86d989889c-wmllc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-959832 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-959832 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-959832 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-959832 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-959832 event: Registered Node addons-959832 in Controller
	
	
	==> dmesg <==
	[  +5.073317] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.127265] kauditd_printk_skb: 76 callbacks suppressed
	[  +6.787602] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.808461] kauditd_printk_skb: 34 callbacks suppressed
	[Sep 6 18:31] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.023954] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.411470] kauditd_printk_skb: 60 callbacks suppressed
	[  +6.032630] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.000760] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.371405] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.464629] kauditd_printk_skb: 42 callbacks suppressed
	[  +9.171733] kauditd_printk_skb: 9 callbacks suppressed
	[Sep 6 18:32] kauditd_printk_skb: 30 callbacks suppressed
	[Sep 6 18:34] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:39] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:40] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.061671] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.069446] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.609090] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.878882] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.370924] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.422494] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.580656] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.557034] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d] <==
	{"level":"info","ts":"2024-09-06T18:31:25.014738Z","caller":"traceutil/trace.go:171","msg":"trace[827224904] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1082; }","duration":"258.225918ms","start":"2024-09-06T18:31:24.756506Z","end":"2024-09-06T18:31:25.014732Z","steps":["trace[827224904] 'range keys from in-memory index tree'  (duration: 258.150808ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:31:25.014813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"230.47361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:25.014826Z","caller":"traceutil/trace.go:171","msg":"trace[654326182] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1082; }","duration":"230.487781ms","start":"2024-09-06T18:31:24.784334Z","end":"2024-09-06T18:31:25.014822Z","steps":["trace[654326182] 'range keys from in-memory index tree'  (duration: 230.413178ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:30.448893Z","caller":"traceutil/trace.go:171","msg":"trace[336175103] linearizableReadLoop","detail":"{readStateIndex:1140; appliedIndex:1139; }","duration":"193.667597ms","start":"2024-09-06T18:31:30.255210Z","end":"2024-09-06T18:31:30.448878Z","steps":["trace[336175103] 'read index received'  (duration: 193.5747ms)","trace[336175103] 'applied index is now lower than readState.Index'  (duration: 92.313µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:31:30.449052Z","caller":"traceutil/trace.go:171","msg":"trace[147865116] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"195.658402ms","start":"2024-09-06T18:31:30.253384Z","end":"2024-09-06T18:31:30.449042Z","steps":["trace[147865116] 'process raft request'  (duration: 195.381086ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:31:30.449255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.027216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:30.449308Z","caller":"traceutil/trace.go:171","msg":"trace[1936020184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"194.091492ms","start":"2024-09-06T18:31:30.255208Z","end":"2024-09-06T18:31:30.449299Z","steps":["trace[1936020184] 'agreement among raft nodes before linearized reading'  (duration: 194.016579ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:38.095195Z","caller":"traceutil/trace.go:171","msg":"trace[688394279] linearizableReadLoop","detail":"{readStateIndex:1162; appliedIndex:1161; }","duration":"115.853572ms","start":"2024-09-06T18:31:37.979325Z","end":"2024-09-06T18:31:38.095179Z","steps":["trace[688394279] 'read index received'  (duration: 115.687137ms)","trace[688394279] 'applied index is now lower than readState.Index'  (duration: 165.625µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-06T18:31:38.095479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.064057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:38.095541Z","caller":"traceutil/trace.go:171","msg":"trace[1813618553] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1130; }","duration":"116.211558ms","start":"2024-09-06T18:31:37.979321Z","end":"2024-09-06T18:31:38.095532Z","steps":["trace[1813618553] 'agreement among raft nodes before linearized reading'  (duration: 116.005384ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:38.095837Z","caller":"traceutil/trace.go:171","msg":"trace[2080125568] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"147.639748ms","start":"2024-09-06T18:31:37.948183Z","end":"2024-09-06T18:31:38.095822Z","steps":["trace[2080125568] 'process raft request'  (duration: 146.880754ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:42.416683Z","caller":"traceutil/trace.go:171","msg":"trace[91810177] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"156.247568ms","start":"2024-09-06T18:31:42.260415Z","end":"2024-09-06T18:31:42.416663Z","steps":["trace[91810177] 'process raft request'  (duration: 155.748211ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:40:07.229181Z","caller":"traceutil/trace.go:171","msg":"trace[484312089] linearizableReadLoop","detail":"{readStateIndex:2159; appliedIndex:2158; }","duration":"409.788256ms","start":"2024-09-06T18:40:06.819346Z","end":"2024-09-06T18:40:07.229135Z","steps":["trace[484312089] 'read index received'  (duration: 409.628912ms)","trace[484312089] 'applied index is now lower than readState.Index'  (duration: 158.846µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:40:07.229379Z","caller":"traceutil/trace.go:171","msg":"trace[1656832041] transaction","detail":"{read_only:false; response_revision:2017; number_of_response:1; }","duration":"491.002048ms","start":"2024-09-06T18:40:06.738356Z","end":"2024-09-06T18:40:07.229358Z","steps":["trace[1656832041] 'process raft request'  (duration: 490.652338ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:40:07.229604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.584673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:40:07.229643Z","caller":"traceutil/trace.go:171","msg":"trace[1915074209] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2017; }","duration":"248.626111ms","start":"2024-09-06T18:40:06.981009Z","end":"2024-09-06T18:40:07.229635Z","steps":["trace[1915074209] 'agreement among raft nodes before linearized reading'  (duration: 248.574709ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:40:07.229740Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:40:06.738339Z","time spent":"491.264052ms","remote":"127.0.0.1:39516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-959832\" mod_revision:1958 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-959832\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-959832\" > >"}
	{"level":"warn","ts":"2024-09-06T18:40:07.229558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"410.139686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-06T18:40:07.229900Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.345839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-06T18:40:07.229941Z","caller":"traceutil/trace.go:171","msg":"trace[1213588532] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2017; }","duration":"183.385298ms","start":"2024-09-06T18:40:07.046548Z","end":"2024-09-06T18:40:07.229933Z","steps":["trace[1213588532] 'agreement among raft nodes before linearized reading'  (duration: 183.300185ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:40:07.229918Z","caller":"traceutil/trace.go:171","msg":"trace[1459748069] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2017; }","duration":"410.570505ms","start":"2024-09-06T18:40:06.819339Z","end":"2024-09-06T18:40:07.229910Z","steps":["trace[1459748069] 'agreement among raft nodes before linearized reading'  (duration: 410.06832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:40:07.230002Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:40:06.819307Z","time spent":"410.688119ms","remote":"127.0.0.1:39260","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-09-06T18:40:09.281386Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1536}
	{"level":"info","ts":"2024-09-06T18:40:09.333184Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1536,"took":"51.266331ms","hash":4192817885,"current-db-size-bytes":6647808,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3444736,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-06T18:40:09.333251Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4192817885,"revision":1536,"compact-revision":-1}
	
	
	==> gcp-auth [bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961] <==
	2024/09/06 18:31:42 GCP Auth Webhook started!
	2024/09/06 18:31:43 Ready to marshal response ...
	2024/09/06 18:31:43 Ready to write response ...
	2024/09/06 18:31:44 Ready to marshal response ...
	2024/09/06 18:31:44 Ready to write response ...
	2024/09/06 18:31:44 Ready to marshal response ...
	2024/09/06 18:31:44 Ready to write response ...
	2024/09/06 18:39:57 Ready to marshal response ...
	2024/09/06 18:39:57 Ready to write response ...
	2024/09/06 18:40:01 Ready to marshal response ...
	2024/09/06 18:40:01 Ready to write response ...
	2024/09/06 18:40:03 Ready to marshal response ...
	2024/09/06 18:40:03 Ready to write response ...
	2024/09/06 18:40:12 Ready to marshal response ...
	2024/09/06 18:40:12 Ready to write response ...
	2024/09/06 18:40:20 Ready to marshal response ...
	2024/09/06 18:40:20 Ready to write response ...
	2024/09/06 18:40:36 Ready to marshal response ...
	2024/09/06 18:40:36 Ready to write response ...
	2024/09/06 18:40:36 Ready to marshal response ...
	2024/09/06 18:40:36 Ready to write response ...
	2024/09/06 18:40:43 Ready to marshal response ...
	2024/09/06 18:40:43 Ready to write response ...
	
	
	==> kernel <==
	 18:40:59 up 11 min,  0 users,  load average: 1.99, 1.15, 0.69
	Linux addons-959832 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9] <==
	E0906 18:32:14.711040       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.186.155:443: connect: connection refused" logger="UnhandledError"
	W0906 18:32:14.711528       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 18:32:14.711932       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0906 18:32:14.714123       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.186.155:443: connect: connection refused" logger="UnhandledError"
	E0906 18:32:14.719474       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.186.155:443: connect: connection refused" logger="UnhandledError"
	I0906 18:32:14.784984       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0906 18:39:53.218243       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0906 18:39:54.261305       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0906 18:40:11.987036       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0906 18:40:12.163983       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.110.216"}
	I0906 18:40:13.051545       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0906 18:40:35.983222       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:35.983535       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:40:36.005118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:36.005246       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:40:36.035687       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:36.035737       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:40:36.054186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:36.054461       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0906 18:40:37.036569       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0906 18:40:37.057021       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0906 18:40:37.073802       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49] <==
	W0906 18:40:40.482396       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:40:40.482556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:40:40.938352       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:40:40.938413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:40:41.124392       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:40:41.124488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:40:45.723588       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:40:45.723666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:40:46.118637       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:40:46.118749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:40:46.223772       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:40:46.223806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:40:46.739616       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-959832"
	I0906 18:40:47.681013       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0906 18:40:47.681115       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 18:40:48.246986       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0906 18:40:48.247095       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 18:40:49.916821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="3.445µs"
	W0906 18:40:52.477004       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:40:52.477049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:40:52.616550       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:40:52.616601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:40:54.489507       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:40:54.489648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:40:58.454614       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="3.229µs"
	
	
	==> kube-proxy [f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:30:20.895600       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:30:20.905684       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.98"]
	E0906 18:30:20.905767       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:30:20.981385       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:30:20.981522       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:30:20.981552       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:30:20.986309       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:30:20.986680       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:30:20.986707       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:30:20.988245       1 config.go:197] "Starting service config controller"
	I0906 18:30:20.988269       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:30:20.988299       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:30:20.988303       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:30:20.988869       1 config.go:326] "Starting node config controller"
	I0906 18:30:20.988881       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:30:21.089002       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:30:21.089043       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:30:21.089077       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832] <==
	W0906 18:30:10.632826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:10.632881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:10.632992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 18:30:10.633043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:10.633145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 18:30:10.633198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:10.633303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:30:10.633365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.559856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 18:30:11.559915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.591626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:11.591724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.593014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 18:30:11.593712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.624825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 18:30:11.625533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.640090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:11.640140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.646831       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 18:30:11.646890       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0906 18:30:11.875922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 18:30:11.876131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.954173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 18:30:11.954234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0906 18:30:14.512534       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 18:40:51 addons-959832 kubelet[1215]: I0906 18:40:51.346723    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac00c1c-d26e-4f08-b91c-49baa60d8def" path="/var/lib/kubelet/pods/7ac00c1c-d26e-4f08-b91c-49baa60d8def/volumes"
	Sep 06 18:40:53 addons-959832 kubelet[1215]: E0906 18:40:53.340174    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3"
	Sep 06 18:40:53 addons-959832 kubelet[1215]: E0906 18:40:53.765407    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648053764681386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:40:53 addons-959832 kubelet[1215]: E0906 18:40:53.765652    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648053764681386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:557332,},InodesUsed:&UInt64Value{Value:190,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.104841    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-gcp-creds\") pod \"ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3\" (UID: \"ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3\") "
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.104905    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9gwq\" (UniqueName: \"kubernetes.io/projected/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-kube-api-access-v9gwq\") pod \"ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3\" (UID: \"ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3\") "
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.105269    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3" (UID: "ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.120998    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-kube-api-access-v9gwq" (OuterVolumeSpecName: "kube-api-access-v9gwq") pod "ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3" (UID: "ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3"). InnerVolumeSpecName "kube-api-access-v9gwq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.205694    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-v9gwq\" (UniqueName: \"kubernetes.io/projected/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-kube-api-access-v9gwq\") on node \"addons-959832\" DevicePath \"\""
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.205735    1215 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3-gcp-creds\") on node \"addons-959832\" DevicePath \"\""
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.811545    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7khw\" (UniqueName: \"kubernetes.io/projected/995000c4-356d-4aee-b8b4-6c719240ca26-kube-api-access-z7khw\") pod \"995000c4-356d-4aee-b8b4-6c719240ca26\" (UID: \"995000c4-356d-4aee-b8b4-6c719240ca26\") "
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.814539    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/995000c4-356d-4aee-b8b4-6c719240ca26-kube-api-access-z7khw" (OuterVolumeSpecName: "kube-api-access-z7khw") pod "995000c4-356d-4aee-b8b4-6c719240ca26" (UID: "995000c4-356d-4aee-b8b4-6c719240ca26"). InnerVolumeSpecName "kube-api-access-z7khw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.912620    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8hjv\" (UniqueName: \"kubernetes.io/projected/8ea39930-6a75-4ad5-a074-233a2b95f98f-kube-api-access-g8hjv\") pod \"8ea39930-6a75-4ad5-a074-233a2b95f98f\" (UID: \"8ea39930-6a75-4ad5-a074-233a2b95f98f\") "
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.912759    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z7khw\" (UniqueName: \"kubernetes.io/projected/995000c4-356d-4aee-b8b4-6c719240ca26-kube-api-access-z7khw\") on node \"addons-959832\" DevicePath \"\""
	Sep 06 18:40:58 addons-959832 kubelet[1215]: I0906 18:40:58.915027    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea39930-6a75-4ad5-a074-233a2b95f98f-kube-api-access-g8hjv" (OuterVolumeSpecName: "kube-api-access-g8hjv") pod "8ea39930-6a75-4ad5-a074-233a2b95f98f" (UID: "8ea39930-6a75-4ad5-a074-233a2b95f98f"). InnerVolumeSpecName "kube-api-access-g8hjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.013788    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g8hjv\" (UniqueName: \"kubernetes.io/projected/8ea39930-6a75-4ad5-a074-233a2b95f98f-kube-api-access-g8hjv\") on node \"addons-959832\" DevicePath \"\""
	Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.344671    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3" path="/var/lib/kubelet/pods/ba1a2f6e-8b2f-490f-a7b0-0ce0e73ed7c3/volumes"
	Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.387322    1215 scope.go:117] "RemoveContainer" containerID="dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907"
	Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.435531    1215 scope.go:117] "RemoveContainer" containerID="dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907"
	Sep 06 18:40:59 addons-959832 kubelet[1215]: E0906 18:40:59.436043    1215 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907\": container with ID starting with dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907 not found: ID does not exist" containerID="dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907"
	Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.436091    1215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907"} err="failed to get container status \"dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907\": rpc error: code = NotFound desc = could not find container \"dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907\": container with ID starting with dfc2e22543aa63aed56961248a143e0fb46785bfc504dffc3df1c6711c6da907 not found: ID does not exist"
	Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.436116    1215 scope.go:117] "RemoveContainer" containerID="4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28"
	Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.461988    1215 scope.go:117] "RemoveContainer" containerID="4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28"
	Sep 06 18:40:59 addons-959832 kubelet[1215]: E0906 18:40:59.462660    1215 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28\": container with ID starting with 4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28 not found: ID does not exist" containerID="4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28"
	Sep 06 18:40:59 addons-959832 kubelet[1215]: I0906 18:40:59.462708    1215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28"} err="failed to get container status \"4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28\": rpc error: code = NotFound desc = could not find container \"4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28\": container with ID starting with 4613179581ecef2478766afce7cc408172e74a4ba40644a676229154ced15a28 not found: ID does not exist"
	
	
	==> storage-provisioner [095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120] <==
	I0906 18:30:26.339092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:30:26.364532       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:30:26.364614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:30:26.389908       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:30:26.390911       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3!
	I0906 18:30:26.391024       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c870ecaa-1488-487e-a063-0e518015e13e", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3 became leader
	I0906 18:30:26.492036       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-959832 -n addons-959832
helpers_test.go:261: (dbg) Run:  kubectl --context addons-959832 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-gbh5k ingress-nginx-admission-patch-h6cwj
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-959832 describe pod busybox ingress-nginx-admission-create-gbh5k ingress-nginx-admission-patch-h6cwj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-959832 describe pod busybox ingress-nginx-admission-create-gbh5k ingress-nginx-admission-patch-h6cwj: exit status 1 (73.07959ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-959832/192.168.39.98
	Start Time:       Fri, 06 Sep 2024 18:31:44 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n8sxx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n8sxx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-959832
	  Normal   Pulling    7m48s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m37s (x6 over 9m15s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m15s (x20 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gbh5k" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h6cwj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-959832 describe pod busybox ingress-nginx-admission-create-gbh5k ingress-nginx-admission-patch-h6cwj: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.13s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (150.29s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-959832 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-959832 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-959832 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d21e1ab5-c3ed-4c03-9a60-7b9908550e31] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d21e1ab5-c3ed-4c03-9a60-7b9908550e31] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003493357s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-959832 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.665940609s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-959832 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.98
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-959832 addons disable ingress-dns --alsologtostderr -v=1: (1.680938518s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-959832 addons disable ingress --alsologtostderr -v=1: (7.715633625s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-959832 -n addons-959832
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-959832 logs -n 25: (1.346364588s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-693029                                                                     | download-only-693029 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-726386                                                                     | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-693029                                                                     | download-only-693029 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-071210 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | binary-mirror-071210                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42457                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-071210                                                                     | binary-mirror-071210 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-959832 --wait=true                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:39 UTC | 06 Sep 24 18:39 UTC |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-959832 ssh curl -s                                                                   | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-959832 addons                                                                        | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-959832 addons                                                                        | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-959832 ssh cat                                                                       | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | /opt/local-path-provisioner/pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-959832 ip                                                                            | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:41 UTC | 06 Sep 24 18:41 UTC |
	|         | -p addons-959832                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:41 UTC | 06 Sep 24 18:41 UTC |
	|         | -p addons-959832                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:41 UTC | 06 Sep 24 18:41 UTC |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:41 UTC | 06 Sep 24 18:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-959832 ip                                                                            | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:30.440394   13823 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:30.440643   13823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:30.440652   13823 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:30.440656   13823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:30.440824   13823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:29:30.441460   13823 out.go:352] Setting JSON to false
	I0906 18:29:30.442255   13823 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":719,"bootTime":1725646651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:29:30.442312   13823 start.go:139] virtualization: kvm guest
	I0906 18:29:30.444228   13823 out.go:177] * [addons-959832] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 18:29:30.445334   13823 notify.go:220] Checking for updates...
	I0906 18:29:30.445342   13823 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:29:30.446652   13823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:30.448060   13823 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:29:30.449528   13823 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:30.450779   13823 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:29:30.451986   13823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:29:30.453700   13823 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:30.485465   13823 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 18:29:30.486701   13823 start.go:297] selected driver: kvm2
	I0906 18:29:30.486713   13823 start.go:901] validating driver "kvm2" against <nil>
	I0906 18:29:30.486727   13823 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:29:30.487397   13823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:30.487478   13823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 18:29:30.502694   13823 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 18:29:30.502738   13823 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:30.502931   13823 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:29:30.502959   13823 cni.go:84] Creating CNI manager for ""
	I0906 18:29:30.502966   13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 18:29:30.502978   13823 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 18:29:30.503026   13823 start.go:340] cluster config:
	{Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:30.503117   13823 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:30.504979   13823 out.go:177] * Starting "addons-959832" primary control-plane node in "addons-959832" cluster
	I0906 18:29:30.506126   13823 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:29:30.506168   13823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 18:29:30.506178   13823 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:30.506272   13823 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 18:29:30.506286   13823 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 18:29:30.506559   13823 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json ...
	I0906 18:29:30.506577   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json: {Name:mkb043cbbb2997cf908fb60acd39795871d65137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:29:30.506698   13823 start.go:360] acquireMachinesLock for addons-959832: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:29:30.506741   13823 start.go:364] duration metric: took 31.601µs to acquireMachinesLock for "addons-959832"
	I0906 18:29:30.506759   13823 start.go:93] Provisioning new machine with config: &{Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:29:30.506820   13823 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 18:29:30.508432   13823 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 18:29:30.508550   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:29:30.508587   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:29:30.522987   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34483
	I0906 18:29:30.523384   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:29:30.523869   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:29:30.523890   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:29:30.524169   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:29:30.524345   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:30.524450   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:30.524591   13823 start.go:159] libmachine.API.Create for "addons-959832" (driver="kvm2")
	I0906 18:29:30.524624   13823 client.go:168] LocalClient.Create starting
	I0906 18:29:30.524668   13823 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 18:29:30.595679   13823 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 18:29:30.708441   13823 main.go:141] libmachine: Running pre-create checks...
	I0906 18:29:30.708464   13823 main.go:141] libmachine: (addons-959832) Calling .PreCreateCheck
	I0906 18:29:30.708957   13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
	I0906 18:29:30.709397   13823 main.go:141] libmachine: Creating machine...
	I0906 18:29:30.709410   13823 main.go:141] libmachine: (addons-959832) Calling .Create
	I0906 18:29:30.709556   13823 main.go:141] libmachine: (addons-959832) Creating KVM machine...
	I0906 18:29:30.710795   13823 main.go:141] libmachine: (addons-959832) DBG | found existing default KVM network
	I0906 18:29:30.711508   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:30.711378   13845 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0906 18:29:30.711570   13823 main.go:141] libmachine: (addons-959832) DBG | created network xml: 
	I0906 18:29:30.711607   13823 main.go:141] libmachine: (addons-959832) DBG | <network>
	I0906 18:29:30.711624   13823 main.go:141] libmachine: (addons-959832) DBG |   <name>mk-addons-959832</name>
	I0906 18:29:30.711646   13823 main.go:141] libmachine: (addons-959832) DBG |   <dns enable='no'/>
	I0906 18:29:30.711654   13823 main.go:141] libmachine: (addons-959832) DBG |   
	I0906 18:29:30.711661   13823 main.go:141] libmachine: (addons-959832) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0906 18:29:30.711668   13823 main.go:141] libmachine: (addons-959832) DBG |     <dhcp>
	I0906 18:29:30.711673   13823 main.go:141] libmachine: (addons-959832) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0906 18:29:30.711684   13823 main.go:141] libmachine: (addons-959832) DBG |     </dhcp>
	I0906 18:29:30.711691   13823 main.go:141] libmachine: (addons-959832) DBG |   </ip>
	I0906 18:29:30.711698   13823 main.go:141] libmachine: (addons-959832) DBG |   
	I0906 18:29:30.711706   13823 main.go:141] libmachine: (addons-959832) DBG | </network>
	I0906 18:29:30.711714   13823 main.go:141] libmachine: (addons-959832) DBG | 
	I0906 18:29:30.716914   13823 main.go:141] libmachine: (addons-959832) DBG | trying to create private KVM network mk-addons-959832 192.168.39.0/24...
	I0906 18:29:30.784502   13823 main.go:141] libmachine: (addons-959832) DBG | private KVM network mk-addons-959832 192.168.39.0/24 created
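Editor's note: the DBG lines above show the libvirt network XML the kvm2 driver generates and then creates. As a minimal sketch (not part of the test run, and assuming shell access to the Jenkins host and the qemu:///system connection named in the machine config), the freshly created network can be inspected with standard virsh commands:

    # list libvirt networks; mk-addons-959832 should appear alongside "default"
    virsh --connect qemu:///system net-list --all
    # dump the live definition; it should match the <network> XML logged above
    virsh --connect qemu:///system net-dumpxml mk-addons-959832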
	I0906 18:29:30.784548   13823 main.go:141] libmachine: (addons-959832) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 ...
	I0906 18:29:30.784580   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:30.784495   13845 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:30.784596   13823 main.go:141] libmachine: (addons-959832) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:29:30.784621   13823 main.go:141] libmachine: (addons-959832) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 18:29:31.031605   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.031496   13845 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa...
	I0906 18:29:31.150285   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.150157   13845 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/addons-959832.rawdisk...
	I0906 18:29:31.150312   13823 main.go:141] libmachine: (addons-959832) DBG | Writing magic tar header
	I0906 18:29:31.150322   13823 main.go:141] libmachine: (addons-959832) DBG | Writing SSH key tar header
	I0906 18:29:31.150329   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.150306   13845 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 ...
	I0906 18:29:31.150514   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832
	I0906 18:29:31.150551   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 18:29:31.150582   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 (perms=drwx------)
	I0906 18:29:31.150604   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 18:29:31.150630   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 18:29:31.150652   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 18:29:31.150664   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:31.150681   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 18:29:31.150694   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 18:29:31.150709   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins
	I0906 18:29:31.150726   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 18:29:31.150738   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home
	I0906 18:29:31.150755   13823 main.go:141] libmachine: (addons-959832) DBG | Skipping /home - not owner
	I0906 18:29:31.150771   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 18:29:31.150781   13823 main.go:141] libmachine: (addons-959832) Creating domain...
	I0906 18:29:31.151641   13823 main.go:141] libmachine: (addons-959832) define libvirt domain using xml: 
	I0906 18:29:31.151668   13823 main.go:141] libmachine: (addons-959832) <domain type='kvm'>
	I0906 18:29:31.151680   13823 main.go:141] libmachine: (addons-959832)   <name>addons-959832</name>
	I0906 18:29:31.151693   13823 main.go:141] libmachine: (addons-959832)   <memory unit='MiB'>4000</memory>
	I0906 18:29:31.151703   13823 main.go:141] libmachine: (addons-959832)   <vcpu>2</vcpu>
	I0906 18:29:31.151718   13823 main.go:141] libmachine: (addons-959832)   <features>
	I0906 18:29:31.151723   13823 main.go:141] libmachine: (addons-959832)     <acpi/>
	I0906 18:29:31.151727   13823 main.go:141] libmachine: (addons-959832)     <apic/>
	I0906 18:29:31.151736   13823 main.go:141] libmachine: (addons-959832)     <pae/>
	I0906 18:29:31.151741   13823 main.go:141] libmachine: (addons-959832)     
	I0906 18:29:31.151747   13823 main.go:141] libmachine: (addons-959832)   </features>
	I0906 18:29:31.151754   13823 main.go:141] libmachine: (addons-959832)   <cpu mode='host-passthrough'>
	I0906 18:29:31.151759   13823 main.go:141] libmachine: (addons-959832)   
	I0906 18:29:31.151772   13823 main.go:141] libmachine: (addons-959832)   </cpu>
	I0906 18:29:31.151779   13823 main.go:141] libmachine: (addons-959832)   <os>
	I0906 18:29:31.151788   13823 main.go:141] libmachine: (addons-959832)     <type>hvm</type>
	I0906 18:29:31.151795   13823 main.go:141] libmachine: (addons-959832)     <boot dev='cdrom'/>
	I0906 18:29:31.151801   13823 main.go:141] libmachine: (addons-959832)     <boot dev='hd'/>
	I0906 18:29:31.151808   13823 main.go:141] libmachine: (addons-959832)     <bootmenu enable='no'/>
	I0906 18:29:31.151812   13823 main.go:141] libmachine: (addons-959832)   </os>
	I0906 18:29:31.151818   13823 main.go:141] libmachine: (addons-959832)   <devices>
	I0906 18:29:31.151825   13823 main.go:141] libmachine: (addons-959832)     <disk type='file' device='cdrom'>
	I0906 18:29:31.151834   13823 main.go:141] libmachine: (addons-959832)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/boot2docker.iso'/>
	I0906 18:29:31.151841   13823 main.go:141] libmachine: (addons-959832)       <target dev='hdc' bus='scsi'/>
	I0906 18:29:31.151847   13823 main.go:141] libmachine: (addons-959832)       <readonly/>
	I0906 18:29:31.151853   13823 main.go:141] libmachine: (addons-959832)     </disk>
	I0906 18:29:31.151859   13823 main.go:141] libmachine: (addons-959832)     <disk type='file' device='disk'>
	I0906 18:29:31.151867   13823 main.go:141] libmachine: (addons-959832)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 18:29:31.151878   13823 main.go:141] libmachine: (addons-959832)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/addons-959832.rawdisk'/>
	I0906 18:29:31.151886   13823 main.go:141] libmachine: (addons-959832)       <target dev='hda' bus='virtio'/>
	I0906 18:29:31.151894   13823 main.go:141] libmachine: (addons-959832)     </disk>
	I0906 18:29:31.151899   13823 main.go:141] libmachine: (addons-959832)     <interface type='network'>
	I0906 18:29:31.151908   13823 main.go:141] libmachine: (addons-959832)       <source network='mk-addons-959832'/>
	I0906 18:29:31.151915   13823 main.go:141] libmachine: (addons-959832)       <model type='virtio'/>
	I0906 18:29:31.151923   13823 main.go:141] libmachine: (addons-959832)     </interface>
	I0906 18:29:31.151931   13823 main.go:141] libmachine: (addons-959832)     <interface type='network'>
	I0906 18:29:31.151957   13823 main.go:141] libmachine: (addons-959832)       <source network='default'/>
	I0906 18:29:31.151984   13823 main.go:141] libmachine: (addons-959832)       <model type='virtio'/>
	I0906 18:29:31.151993   13823 main.go:141] libmachine: (addons-959832)     </interface>
	I0906 18:29:31.152008   13823 main.go:141] libmachine: (addons-959832)     <serial type='pty'>
	I0906 18:29:31.152028   13823 main.go:141] libmachine: (addons-959832)       <target port='0'/>
	I0906 18:29:31.152046   13823 main.go:141] libmachine: (addons-959832)     </serial>
	I0906 18:29:31.152059   13823 main.go:141] libmachine: (addons-959832)     <console type='pty'>
	I0906 18:29:31.152070   13823 main.go:141] libmachine: (addons-959832)       <target type='serial' port='0'/>
	I0906 18:29:31.152078   13823 main.go:141] libmachine: (addons-959832)     </console>
	I0906 18:29:31.152086   13823 main.go:141] libmachine: (addons-959832)     <rng model='virtio'>
	I0906 18:29:31.152095   13823 main.go:141] libmachine: (addons-959832)       <backend model='random'>/dev/random</backend>
	I0906 18:29:31.152103   13823 main.go:141] libmachine: (addons-959832)     </rng>
	I0906 18:29:31.152113   13823 main.go:141] libmachine: (addons-959832)     
	I0906 18:29:31.152126   13823 main.go:141] libmachine: (addons-959832)     
	I0906 18:29:31.152138   13823 main.go:141] libmachine: (addons-959832)   </devices>
	I0906 18:29:31.152148   13823 main.go:141] libmachine: (addons-959832) </domain>
	I0906 18:29:31.152161   13823 main.go:141] libmachine: (addons-959832) 
	I0906 18:29:31.158081   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:b5:f5:6a in network default
	I0906 18:29:31.158542   13823 main.go:141] libmachine: (addons-959832) Ensuring networks are active...
	I0906 18:29:31.158562   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:31.159097   13823 main.go:141] libmachine: (addons-959832) Ensuring network default is active
	I0906 18:29:31.159345   13823 main.go:141] libmachine: (addons-959832) Ensuring network mk-addons-959832 is active
	I0906 18:29:31.159767   13823 main.go:141] libmachine: (addons-959832) Getting domain xml...
	I0906 18:29:31.160314   13823 main.go:141] libmachine: (addons-959832) Creating domain...
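Editor's note: the domain XML above defines the boot2docker ISO as a CD-ROM, the raw disk image, and two virtio NICs (one on mk-addons-959832, one on the default network). A hedged sketch of confirming the definition from the host, again assuming virsh access:

    # basic domain info and its network interfaces / MAC addresses
    virsh --connect qemu:///system dominfo addons-959832
    # expect one NIC in "default" and one in "mk-addons-959832", matching the MACs in the DBG lines below
    virsh --connect qemu:///system domiflist addons-959832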
	I0906 18:29:32.546282   13823 main.go:141] libmachine: (addons-959832) Waiting to get IP...
	I0906 18:29:32.547051   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:32.547580   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:32.547618   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:32.547518   13845 retry.go:31] will retry after 234.819193ms: waiting for machine to come up
	I0906 18:29:32.783988   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:32.784398   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:32.784420   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:32.784350   13845 retry.go:31] will retry after 374.097016ms: waiting for machine to come up
	I0906 18:29:33.159641   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:33.160076   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:33.160104   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:33.160024   13845 retry.go:31] will retry after 398.438198ms: waiting for machine to come up
	I0906 18:29:33.559453   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:33.559850   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:33.559879   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:33.559800   13845 retry.go:31] will retry after 513.667683ms: waiting for machine to come up
	I0906 18:29:34.075531   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:34.075976   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:34.076002   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:34.075937   13845 retry.go:31] will retry after 542.640322ms: waiting for machine to come up
	I0906 18:29:34.620767   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:34.621139   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:34.621164   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:34.621100   13845 retry.go:31] will retry after 952.553494ms: waiting for machine to come up
	I0906 18:29:35.575061   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:35.575519   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:35.575550   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:35.575475   13845 retry.go:31] will retry after 761.897484ms: waiting for machine to come up
	I0906 18:29:36.339380   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:36.339747   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:36.339775   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:36.339696   13845 retry.go:31] will retry after 1.058974587s: waiting for machine to come up
	I0906 18:29:37.399861   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:37.400184   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:37.400204   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:37.400146   13845 retry.go:31] will retry after 1.319275872s: waiting for machine to come up
	I0906 18:29:38.720600   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:38.721039   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:38.721065   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:38.720974   13845 retry.go:31] will retry after 1.544734383s: waiting for machine to come up
	I0906 18:29:40.267964   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:40.268338   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:40.268365   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:40.268303   13845 retry.go:31] will retry after 2.517498837s: waiting for machine to come up
	I0906 18:29:42.790192   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:42.790620   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:42.790646   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:42.790574   13845 retry.go:31] will retry after 2.829630462s: waiting for machine to come up
	I0906 18:29:45.621992   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:45.622542   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:45.622614   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:45.622535   13845 retry.go:31] will retry after 3.555249592s: waiting for machine to come up
	I0906 18:29:49.181782   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:49.182176   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:49.182199   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:49.182134   13845 retry.go:31] will retry after 4.155059883s: waiting for machine to come up
	I0906 18:29:53.340058   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:53.340648   13823 main.go:141] libmachine: (addons-959832) Found IP for machine: 192.168.39.98
	I0906 18:29:53.340677   13823 main.go:141] libmachine: (addons-959832) Reserving static IP address...
	I0906 18:29:53.340693   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has current primary IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:53.341097   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find host DHCP lease matching {name: "addons-959832", mac: "52:54:00:c2:2d:3d", ip: "192.168.39.98"} in network mk-addons-959832
	I0906 18:29:53.410890   13823 main.go:141] libmachine: (addons-959832) DBG | Getting to WaitForSSH function...
	I0906 18:29:53.410935   13823 main.go:141] libmachine: (addons-959832) Reserved static IP address: 192.168.39.98
	I0906 18:29:53.410957   13823 main.go:141] libmachine: (addons-959832) Waiting for SSH to be available...
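Editor's note: the retry loop above simply polls libvirt until a DHCP lease appears for the domain's MAC. A sketch (assuming virsh access on the host) of checking the lease directly; it should report the same MAC/IP pair the driver eventually logs below:

    # show DHCP leases handed out on the private network
    virsh --connect qemu:///system net-dhcp-leases mk-addons-959832
    # expected: 52:54:00:c2:2d:3d -> 192.168.39.98/24, hostname addons-959832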
	I0906 18:29:53.413061   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:53.413353   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832
	I0906 18:29:53.413381   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find defined IP address of network mk-addons-959832 interface with MAC address 52:54:00:c2:2d:3d
	I0906 18:29:53.413528   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH client type: external
	I0906 18:29:53.413551   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa (-rw-------)
	I0906 18:29:53.413582   13823 main.go:141] libmachine: (addons-959832) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:29:53.413596   13823 main.go:141] libmachine: (addons-959832) DBG | About to run SSH command:
	I0906 18:29:53.413610   13823 main.go:141] libmachine: (addons-959832) DBG | exit 0
	I0906 18:29:53.424764   13823 main.go:141] libmachine: (addons-959832) DBG | SSH cmd err, output: exit status 255: 
	I0906 18:29:53.424790   13823 main.go:141] libmachine: (addons-959832) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0906 18:29:53.424803   13823 main.go:141] libmachine: (addons-959832) DBG | command : exit 0
	I0906 18:29:53.424811   13823 main.go:141] libmachine: (addons-959832) DBG | err     : exit status 255
	I0906 18:29:53.424834   13823 main.go:141] libmachine: (addons-959832) DBG | output  : 
	I0906 18:29:56.425071   13823 main.go:141] libmachine: (addons-959832) DBG | Getting to WaitForSSH function...
	I0906 18:29:56.427965   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.428313   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.428337   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.428498   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH client type: external
	I0906 18:29:56.428529   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa (-rw-------)
	I0906 18:29:56.428584   13823 main.go:141] libmachine: (addons-959832) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:29:56.428611   13823 main.go:141] libmachine: (addons-959832) DBG | About to run SSH command:
	I0906 18:29:56.428625   13823 main.go:141] libmachine: (addons-959832) DBG | exit 0
	I0906 18:29:56.557151   13823 main.go:141] libmachine: (addons-959832) DBG | SSH cmd err, output: <nil>: 
	I0906 18:29:56.557379   13823 main.go:141] libmachine: (addons-959832) KVM machine creation complete!
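Editor's note: the first WaitForSSH attempt fails with exit status 255 because sshd inside the guest is not up yet; the retry succeeds. The external SSH probe can be reproduced by hand from the essential flags logged above (a sketch, not part of the run):

    ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa \
        docker@192.168.39.98 'exit 0' && echo "ssh is ready"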
	I0906 18:29:56.557702   13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
	I0906 18:29:56.558229   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:56.558444   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:56.558623   13823 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 18:29:56.558641   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:29:56.559843   13823 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 18:29:56.559860   13823 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 18:29:56.559867   13823 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 18:29:56.559876   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.562179   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.562551   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.562587   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.562760   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.562922   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.563071   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.563184   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.563323   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.563491   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.563501   13823 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 18:29:56.672324   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:29:56.672345   13823 main.go:141] libmachine: Detecting the provisioner...
	I0906 18:29:56.672355   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.675030   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.675361   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.675396   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.675587   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.675810   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.675962   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.676117   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.676285   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.676485   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.676498   13823 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 18:29:56.789500   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 18:29:56.789599   13823 main.go:141] libmachine: found compatible host: buildroot
	I0906 18:29:56.789615   13823 main.go:141] libmachine: Provisioning with buildroot...
	I0906 18:29:56.789627   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:56.789887   13823 buildroot.go:166] provisioning hostname "addons-959832"
	I0906 18:29:56.789910   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:56.790145   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.792479   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.792813   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.792840   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.792964   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.793128   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.793278   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.793413   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.793564   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.793755   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.793770   13823 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-959832 && echo "addons-959832" | sudo tee /etc/hostname
	I0906 18:29:56.923171   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-959832
	
	I0906 18:29:56.923196   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.925829   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.926137   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.926165   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.926301   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.926516   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.926688   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.926855   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.927018   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.927167   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.927182   13823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-959832' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-959832/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-959832' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:29:57.047682   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
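Editor's note: the shell fragment above sets the guest hostname and rewrites (or appends) the 127.0.1.1 entry. A quick hedged check over the same SSH session as above would be:

    # run on the guest; both expectations follow directly from the script above
    hostname                          # expected: addons-959832
    grep addons-959832 /etc/hosts     # expected: 127.0.1.1 addons-959832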
	I0906 18:29:57.047717   13823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 18:29:57.047760   13823 buildroot.go:174] setting up certificates
	I0906 18:29:57.047779   13823 provision.go:84] configureAuth start
	I0906 18:29:57.047796   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:57.048060   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:57.050451   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.050790   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.050828   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.050983   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.053241   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.053584   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.053615   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.053778   13823 provision.go:143] copyHostCerts
	I0906 18:29:57.053849   13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 18:29:57.054015   13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 18:29:57.054086   13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 18:29:57.054144   13823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.addons-959832 san=[127.0.0.1 192.168.39.98 addons-959832 localhost minikube]
	I0906 18:29:57.192700   13823 provision.go:177] copyRemoteCerts
	I0906 18:29:57.192756   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:29:57.192779   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.195474   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.195742   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.195770   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.195927   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.196116   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.196268   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.196488   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.284813   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 18:29:57.312554   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 18:29:57.338356   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:29:57.363612   13823 provision.go:87] duration metric: took 315.815529ms to configureAuth
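Editor's note: configureAuth generates a server certificate for the SANs listed above and scp's the CA and server key pair into /etc/docker on the guest. A sketch of verifying the copy, assuming the same SSH access as above:

    # on the guest: the three files transferred above should be present
    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem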
	I0906 18:29:57.363640   13823 buildroot.go:189] setting minikube options for container-runtime
	I0906 18:29:57.363826   13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:29:57.363907   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.366452   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.366841   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.366868   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.367008   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.367195   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.367349   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.367475   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.367620   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:57.367765   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:57.367779   13823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 18:29:57.603163   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
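Editor's note: the SSH command above writes the CRI-O minikube options drop-in and restarts the service; the echoed output confirms its content. On the guest it should read:

    sudo cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '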
	
	I0906 18:29:57.603188   13823 main.go:141] libmachine: Checking connection to Docker...
	I0906 18:29:57.603196   13823 main.go:141] libmachine: (addons-959832) Calling .GetURL
	I0906 18:29:57.604560   13823 main.go:141] libmachine: (addons-959832) DBG | Using libvirt version 6000000
	I0906 18:29:57.606895   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.607175   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.607201   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.607398   13823 main.go:141] libmachine: Docker is up and running!
	I0906 18:29:57.607413   13823 main.go:141] libmachine: Reticulating splines...
	I0906 18:29:57.607421   13823 client.go:171] duration metric: took 27.082788539s to LocalClient.Create
	I0906 18:29:57.607447   13823 start.go:167] duration metric: took 27.082857245s to libmachine.API.Create "addons-959832"
	I0906 18:29:57.607462   13823 start.go:293] postStartSetup for "addons-959832" (driver="kvm2")
	I0906 18:29:57.607488   13823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:29:57.607514   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.607782   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:29:57.607801   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.609814   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.610081   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.610134   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.610226   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.610417   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.610608   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.610769   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.695798   13823 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:29:57.700464   13823 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 18:29:57.700493   13823 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 18:29:57.700596   13823 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 18:29:57.700630   13823 start.go:296] duration metric: took 93.15804ms for postStartSetup
	I0906 18:29:57.700663   13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
	I0906 18:29:57.701257   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:57.704196   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.704554   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.704585   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.704877   13823 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json ...
	I0906 18:29:57.705072   13823 start.go:128] duration metric: took 27.1982419s to createHost
	I0906 18:29:57.705098   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.707499   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.707842   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.707862   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.708035   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.708256   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.708433   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.708569   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.708760   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:57.708991   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:57.709005   13823 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 18:29:57.821756   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725647397.800291454
	
	I0906 18:29:57.821779   13823 fix.go:216] guest clock: 1725647397.800291454
	I0906 18:29:57.821789   13823 fix.go:229] Guest: 2024-09-06 18:29:57.800291454 +0000 UTC Remote: 2024-09-06 18:29:57.705083739 +0000 UTC m=+27.297090225 (delta=95.207715ms)
	I0906 18:29:57.821840   13823 fix.go:200] guest clock delta is within tolerance: 95.207715ms
	I0906 18:29:57.821853   13823 start.go:83] releasing machines lock for "addons-959832", held for 27.315095887s
	I0906 18:29:57.821881   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.822185   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:57.824591   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.824964   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.824991   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.825103   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.825621   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.825837   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.825955   13823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:29:57.825998   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.826048   13823 ssh_runner.go:195] Run: cat /version.json
	I0906 18:29:57.826075   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.828396   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.828722   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.828752   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.828771   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.828910   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.829111   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.829201   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.829221   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.829287   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.829450   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.829463   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.829621   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.829749   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.829859   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.948786   13823 ssh_runner.go:195] Run: systemctl --version
	I0906 18:29:57.955191   13823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 18:29:58.113311   13823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 18:29:58.119769   13823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:29:58.119846   13823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:29:58.135762   13823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 18:29:58.135789   13823 start.go:495] detecting cgroup driver to use...
	I0906 18:29:58.135859   13823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 18:29:58.151729   13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:29:58.166404   13823 docker.go:217] disabling cri-docker service (if available) ...
	I0906 18:29:58.166473   13823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 18:29:58.180954   13823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 18:29:58.195119   13823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 18:29:58.315328   13823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 18:29:58.467302   13823 docker.go:233] disabling docker service ...
	I0906 18:29:58.467362   13823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 18:29:58.482228   13823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 18:29:58.495471   13823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 18:29:58.606896   13823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 18:29:58.717897   13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 18:29:58.732638   13823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:29:58.751394   13823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 18:29:58.751461   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.762265   13823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 18:29:58.762343   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.772625   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.783002   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.793237   13823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:29:58.804024   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.814731   13823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.832054   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.842905   13823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:29:58.852537   13823 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 18:29:58.852595   13823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 18:29:58.866354   13823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:29:58.877194   13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:29:59.004604   13823 ssh_runner.go:195] Run: sudo systemctl restart crio
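Editor's note: the sequence above points crictl at the CRI-O socket and then edits the CRI-O drop-in via sed before the restart: pause image, cgroupfs as cgroup manager, conmon in the "pod" cgroup, and unprivileged low ports via default_sysctls. As a hedged sketch, after the restart the guest should contain roughly:

    sudo cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",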
	I0906 18:29:59.101439   13823 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 18:29:59.101538   13823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 18:29:59.106286   13823 start.go:563] Will wait 60s for crictl version
	I0906 18:29:59.106358   13823 ssh_runner.go:195] Run: which crictl
	I0906 18:29:59.110304   13823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:29:59.148807   13823 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 18:29:59.148953   13823 ssh_runner.go:195] Run: crio --version
	I0906 18:29:59.178394   13823 ssh_runner.go:195] Run: crio --version
	I0906 18:29:59.210051   13823 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 18:29:59.211504   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:59.214173   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:59.214515   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:59.214548   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:59.214703   13823 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 18:29:59.218969   13823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:29:59.231960   13823 kubeadm.go:883] updating cluster {Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 18:29:59.232084   13823 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:29:59.232129   13823 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:29:59.263727   13823 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 18:29:59.263807   13823 ssh_runner.go:195] Run: which lz4
	I0906 18:29:59.267901   13823 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 18:29:59.271879   13823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 18:29:59.271906   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 18:30:00.584417   13823 crio.go:462] duration metric: took 1.316553716s to copy over tarball
	I0906 18:30:00.584486   13823 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 18:30:02.812933   13823 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.228424681s)
	I0906 18:30:02.812968   13823 crio.go:469] duration metric: took 2.22852468s to extract the tarball
	I0906 18:30:02.812978   13823 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 18:30:02.850138   13823 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:30:02.893341   13823 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 18:30:02.893365   13823 cache_images.go:84] Images are preloaded, skipping loading
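If a preload ever looks stale, both the host-side tarball cache and the images it produced in the VM can be inspected; a sketch assuming the default cache layout under the minikube home directory:

    # host side (path is an assumption; it depends on MINIKUBE_HOME)
    ls -lh "${MINIKUBE_HOME:-$HOME/.minikube}"/cache/preloaded-tarball/
    # inside the VM: list what cri-o now reports as loaded
    sudo crictl images | head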
	I0906 18:30:02.893375   13823 kubeadm.go:934] updating node { 192.168.39.98 8443 v1.31.0 crio true true} ...
	I0906 18:30:02.893497   13823 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-959832 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 18:30:02.893579   13823 ssh_runner.go:195] Run: crio config
	I0906 18:30:02.943751   13823 cni.go:84] Creating CNI manager for ""
	I0906 18:30:02.943774   13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 18:30:02.943794   13823 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 18:30:02.943823   13823 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.98 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-959832 NodeName:addons-959832 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 18:30:02.943970   13823 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-959832"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 18:30:02.944029   13823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:30:02.953978   13823 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 18:30:02.954045   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 18:30:02.963215   13823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 18:30:02.979953   13823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:30:02.996152   13823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
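The rendered kubeadm config shown above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new before kubeadm runs; a quick manual way to inspect it (not part of the log):

    # view the staged config exactly as kubeadm will consume it
    sudo cat /var/tmp/minikube/kubeadm.yaml.new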
	I0906 18:30:03.012715   13823 ssh_runner.go:195] Run: grep 192.168.39.98	control-plane.minikube.internal$ /etc/hosts
	I0906 18:30:03.016576   13823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:30:03.028370   13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:03.151085   13823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:03.168582   13823 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832 for IP: 192.168.39.98
	I0906 18:30:03.168607   13823 certs.go:194] generating shared ca certs ...
	I0906 18:30:03.168628   13823 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.168788   13823 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 18:30:03.299866   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt ...
	I0906 18:30:03.299897   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt: {Name:mke2b7c471d9f59e720011f7b10016af11ee9297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.300069   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key ...
	I0906 18:30:03.300084   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key: {Name:mkfac70472d4bba2ebe5c985be8bd475bcc6f548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.300181   13823 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 18:30:03.425280   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt ...
	I0906 18:30:03.425310   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt: {Name:mk08fa1d396d35f7ec100676e804094098a4d70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.425492   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key ...
	I0906 18:30:03.425520   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key: {Name:mk8fe87021c9d97780410b17544e3c226973cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.425623   13823 certs.go:256] generating profile certs ...
	I0906 18:30:03.425675   13823 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key
	I0906 18:30:03.425689   13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt with IP's: []
	I0906 18:30:03.659418   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt ...
	I0906 18:30:03.659450   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: {Name:mk0f9c2f503201837abe2d4909970e9be7ff24f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.659616   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key ...
	I0906 18:30:03.659626   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key: {Name:mkdc65ba0a6775a2f0eae4f7b7974195d86c87d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.659695   13823 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e
	I0906 18:30:03.659712   13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.98]
	I0906 18:30:03.747012   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e ...
	I0906 18:30:03.747038   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e: {Name:mkac8ea9fd65a4ebd10dcac540165d914ce7db8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.747178   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e ...
	I0906 18:30:03.747192   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e: {Name:mk4a1ef0165a60b29c7ae52805cfb6305e8fcd01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.747259   13823 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt
	I0906 18:30:03.747327   13823 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key
	I0906 18:30:03.747377   13823 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key
	I0906 18:30:03.747394   13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt with IP's: []
	I0906 18:30:03.959127   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt ...
	I0906 18:30:03.959155   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt: {Name:mkde7bd5ab135e6d5e9a29c7a353c7a7ff8f667c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.959314   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key ...
	I0906 18:30:03.959329   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key: {Name:mkaff3d579d60be2767a53917ba5e3ae0b22c412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.959489   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 18:30:03.959520   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:30:03.959543   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:30:03.959565   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 18:30:03.960109   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:30:03.987472   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 18:30:04.010859   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:30:04.045335   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:30:04.069442   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 18:30:04.096260   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 18:30:04.121182   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:30:04.149817   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 18:30:04.173890   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:30:04.197498   13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 18:30:04.216950   13823 ssh_runner.go:195] Run: openssl version
	I0906 18:30:04.222654   13823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:30:04.233330   13823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:04.237701   13823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:04.237760   13823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:04.243532   13823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 18:30:04.256013   13823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:30:04.260734   13823 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:30:04.260787   13823 kubeadm.go:392] StartCluster: {Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:30:04.260898   13823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 18:30:04.260952   13823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 18:30:04.303067   13823 cri.go:89] found id: ""
	I0906 18:30:04.303126   13823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 18:30:04.313281   13823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 18:30:04.324983   13823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 18:30:04.335214   13823 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 18:30:04.335235   13823 kubeadm.go:157] found existing configuration files:
	
	I0906 18:30:04.335277   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 18:30:04.344648   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 18:30:04.344695   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 18:30:04.354421   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 18:30:04.363814   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 18:30:04.363883   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 18:30:04.373191   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 18:30:04.382426   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 18:30:04.382489   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 18:30:04.392389   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 18:30:04.402110   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 18:30:04.402181   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 18:30:04.411730   13823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 18:30:04.463645   13823 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 18:30:04.463694   13823 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 18:30:04.559431   13823 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 18:30:04.559574   13823 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 18:30:04.559691   13823 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 18:30:04.568785   13823 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 18:30:04.633550   13823 out.go:235]   - Generating certificates and keys ...
	I0906 18:30:04.633656   13823 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 18:30:04.633738   13823 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 18:30:04.850232   13823 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 18:30:05.028833   13823 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 18:30:05.198669   13823 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 18:30:05.265171   13823 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 18:30:05.396138   13823 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 18:30:05.396314   13823 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-959832 localhost] and IPs [192.168.39.98 127.0.0.1 ::1]
	I0906 18:30:05.615454   13823 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 18:30:05.615825   13823 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-959832 localhost] and IPs [192.168.39.98 127.0.0.1 ::1]
	I0906 18:30:05.699300   13823 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 18:30:05.879000   13823 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 18:30:05.979662   13823 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 18:30:05.979866   13823 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 18:30:06.143465   13823 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 18:30:06.399160   13823 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 18:30:06.612959   13823 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 18:30:06.801192   13823 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 18:30:06.957635   13823 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 18:30:06.958075   13823 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 18:30:06.960513   13823 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 18:30:06.962637   13823 out.go:235]   - Booting up control plane ...
	I0906 18:30:06.962755   13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 18:30:06.962853   13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 18:30:06.962936   13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 18:30:06.982006   13823 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 18:30:06.987635   13823 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 18:30:06.987741   13823 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 18:30:07.107392   13823 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 18:30:07.107507   13823 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 18:30:07.608684   13823 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.950467ms
	I0906 18:30:07.608794   13823 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 18:30:12.608494   13823 kubeadm.go:310] [api-check] The API server is healthy after 5.001776937s
	I0906 18:30:12.627560   13823 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 18:30:12.653476   13823 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 18:30:12.689334   13823 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 18:30:12.689602   13823 kubeadm.go:310] [mark-control-plane] Marking the node addons-959832 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 18:30:12.704990   13823 kubeadm.go:310] [bootstrap-token] Using token: ithoaf.u83bc4nltc0uwhpo
	I0906 18:30:12.706456   13823 out.go:235]   - Configuring RBAC rules ...
	I0906 18:30:12.706574   13823 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 18:30:12.717372   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 18:30:12.735384   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 18:30:12.742188   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 18:30:12.748903   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 18:30:12.753193   13823 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 18:30:13.018036   13823 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 18:30:13.440120   13823 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 18:30:14.029827   13823 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 18:30:14.029853   13823 kubeadm.go:310] 
	I0906 18:30:14.029954   13823 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 18:30:14.029981   13823 kubeadm.go:310] 
	I0906 18:30:14.030093   13823 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 18:30:14.030104   13823 kubeadm.go:310] 
	I0906 18:30:14.030140   13823 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 18:30:14.030226   13823 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 18:30:14.030309   13823 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 18:30:14.030318   13823 kubeadm.go:310] 
	I0906 18:30:14.030403   13823 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 18:30:14.030428   13823 kubeadm.go:310] 
	I0906 18:30:14.030488   13823 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 18:30:14.030498   13823 kubeadm.go:310] 
	I0906 18:30:14.030561   13823 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 18:30:14.030660   13823 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 18:30:14.030776   13823 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 18:30:14.030796   13823 kubeadm.go:310] 
	I0906 18:30:14.030915   13823 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 18:30:14.031015   13823 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 18:30:14.031028   13823 kubeadm.go:310] 
	I0906 18:30:14.031132   13823 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ithoaf.u83bc4nltc0uwhpo \
	I0906 18:30:14.031273   13823 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 18:30:14.031306   13823 kubeadm.go:310] 	--control-plane 
	I0906 18:30:14.031316   13823 kubeadm.go:310] 
	I0906 18:30:14.031450   13823 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 18:30:14.031472   13823 kubeadm.go:310] 
	I0906 18:30:14.031592   13823 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ithoaf.u83bc4nltc0uwhpo \
	I0906 18:30:14.031750   13823 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 18:30:14.032620   13823 kubeadm.go:310] W0906 18:30:04.444733     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:14.033044   13823 kubeadm.go:310] W0906 18:30:04.446560     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:14.033225   13823 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
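The two kubeadm warnings above are benign for this run, but they can be cleared with the commands kubeadm itself suggests; a hedged sketch (output path is an assumption):

    # migrate the deprecated kubeadm.k8s.io/v1beta3 spec to the current API version
    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml
    # silence the Service-Kubelet preflight warning
    sudo systemctl enable kubelet.service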
	I0906 18:30:14.033247   13823 cni.go:84] Creating CNI manager for ""
	I0906 18:30:14.033257   13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 18:30:14.035685   13823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 18:30:14.037043   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 18:30:14.051040   13823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
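Minikube drops a single bridge conflist for the cluster network; a minimal check that it is the only CNI config cri-o will pick up (the generated file contents are not reproduced here):

    # list and view the bridge CNI configuration written above
    ls -l /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist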
	I0906 18:30:14.080330   13823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 18:30:14.080403   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:14.080418   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-959832 minikube.k8s.io/updated_at=2024_09_06T18_30_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=addons-959832 minikube.k8s.io/primary=true
	I0906 18:30:14.123199   13823 ops.go:34] apiserver oom_adj: -16
	I0906 18:30:14.247505   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:14.748250   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:15.248440   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:15.747562   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:16.247913   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:16.747636   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:17.248181   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:17.748128   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:17.838400   13823 kubeadm.go:1113] duration metric: took 3.758062138s to wait for elevateKubeSystemPrivileges
	I0906 18:30:17.838441   13823 kubeadm.go:394] duration metric: took 13.577657427s to StartCluster
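With StartCluster finished, a hedged sanity check against the new control plane could use the kubeconfig that the following lines update (path taken from this log):

    # basic post-init checks; expect one Ready control-plane node and kube-system pods coming up
    kubectl --kubeconfig /home/jenkins/minikube-integration/19576-6021/kubeconfig get nodes -o wide
    kubectl --kubeconfig /home/jenkins/minikube-integration/19576-6021/kubeconfig -n kube-system get pods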
	I0906 18:30:17.838464   13823 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:17.838613   13823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:30:17.839096   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:17.839337   13823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 18:30:17.839344   13823 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:30:17.839425   13823 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0906 18:30:17.839549   13823 addons.go:69] Setting yakd=true in profile "addons-959832"
	I0906 18:30:17.839564   13823 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-959832"
	I0906 18:30:17.839564   13823 addons.go:69] Setting helm-tiller=true in profile "addons-959832"
	I0906 18:30:17.839600   13823 addons.go:69] Setting storage-provisioner=true in profile "addons-959832"
	I0906 18:30:17.839601   13823 addons.go:69] Setting inspektor-gadget=true in profile "addons-959832"
	I0906 18:30:17.839616   13823 addons.go:234] Setting addon storage-provisioner=true in "addons-959832"
	I0906 18:30:17.839621   13823 addons.go:234] Setting addon inspektor-gadget=true in "addons-959832"
	I0906 18:30:17.839625   13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:30:17.839635   13823 addons.go:234] Setting addon helm-tiller=true in "addons-959832"
	I0906 18:30:17.839624   13823 addons.go:69] Setting ingress-dns=true in profile "addons-959832"
	I0906 18:30:17.839656   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839680   13823 addons.go:234] Setting addon ingress-dns=true in "addons-959832"
	I0906 18:30:17.839708   13823 addons.go:69] Setting metrics-server=true in profile "addons-959832"
	I0906 18:30:17.839721   13823 addons.go:69] Setting gcp-auth=true in profile "addons-959832"
	I0906 18:30:17.839706   13823 addons.go:69] Setting ingress=true in profile "addons-959832"
	I0906 18:30:17.839737   13823 addons.go:234] Setting addon metrics-server=true in "addons-959832"
	I0906 18:30:17.839738   13823 mustload.go:65] Loading cluster: addons-959832
	I0906 18:30:17.839744   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839683   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839951   13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:30:17.840149   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840201   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.840215   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840233   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.839763   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.840319   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840341   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.840156   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.839590   13823 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-959832"
	I0906 18:30:17.840465   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.840490   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839591   13823 addons.go:69] Setting registry=true in profile "addons-959832"
	I0906 18:30:17.840596   13823 addons.go:234] Setting addon registry=true in "addons-959832"
	I0906 18:30:17.840637   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.840665   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840688   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.841280   13823 out.go:177] * Verifying Kubernetes components...
	I0906 18:30:17.839582   13823 addons.go:234] Setting addon yakd=true in "addons-959832"
	I0906 18:30:17.841416   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839685   13823 addons.go:69] Setting volcano=true in profile "addons-959832"
	I0906 18:30:17.841566   13823 addons.go:234] Setting addon volcano=true in "addons-959832"
	I0906 18:30:17.839689   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.841626   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.841783   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.841812   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.841859   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.839695   13823 addons.go:69] Setting cloud-spanner=true in profile "addons-959832"
	I0906 18:30:17.841931   13823 addons.go:234] Setting addon cloud-spanner=true in "addons-959832"
	I0906 18:30:17.841963   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.841970   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.841989   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.841816   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.842303   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.842321   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.842543   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.842595   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.839696   13823 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-959832"
	I0906 18:30:17.842884   13823 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-959832"
	I0906 18:30:17.839699   13823 addons.go:69] Setting volumesnapshots=true in profile "addons-959832"
	I0906 18:30:17.839713   13823 addons.go:69] Setting default-storageclass=true in profile "addons-959832"
	I0906 18:30:17.839762   13823 addons.go:234] Setting addon ingress=true in "addons-959832"
	I0906 18:30:17.842835   13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:17.839705   13823 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-959832"
	I0906 18:30:17.843210   13823 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-959832"
	I0906 18:30:17.843351   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.843531   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.843563   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.843835   13823 addons.go:234] Setting addon volumesnapshots=true in "addons-959832"
	I0906 18:30:17.843857   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.844006   13823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-959832"
	I0906 18:30:17.844352   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.844369   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.853075   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.861521   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
	I0906 18:30:17.862212   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.862927   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.862953   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.863254   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44761
	I0906 18:30:17.863342   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.863358   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0906 18:30:17.864034   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.864195   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.864234   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.864508   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.864529   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.864924   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.868974   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.869351   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.869398   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.869553   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.869575   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.879527   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39157
	I0906 18:30:17.879542   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0906 18:30:17.879654   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0906 18:30:17.879684   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0906 18:30:17.879760   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.881648   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.885011   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.885160   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I0906 18:30:17.885420   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.885459   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.885971   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.886011   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.886343   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.886375   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.886602   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.886665   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.886686   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.886716   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.886809   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
	I0906 18:30:17.886904   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43073
	I0906 18:30:17.887101   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887199   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887215   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887238   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887599   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.888208   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.888371   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888383   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888541   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888561   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888566   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.888701   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888711   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888743   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888754   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888780   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.889687   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.889730   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.889761   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.889889   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.889901   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.889943   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.889978   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.890062   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.890069   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.890553   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.890607   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.891323   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.891899   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.891930   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.892658   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.892934   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.893002   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.893143   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.893184   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.893806   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.893854   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.894913   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.894960   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.895352   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.895805   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.895847   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.897573   13823 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0906 18:30:17.899434   13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0906 18:30:17.899459   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0906 18:30:17.899481   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.903071   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.903469   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.903516   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.903739   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.903926   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.904048   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.904161   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.911366   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46819
	I0906 18:30:17.912019   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.912706   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.912741   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.913185   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.913911   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.913970   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.916304   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0906 18:30:17.916921   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.917609   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.917631   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.918094   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.918809   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.918849   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.920068   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34889
	I0906 18:30:17.920527   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.921055   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.921080   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.921442   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.921621   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.923561   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.924047   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I0906 18:30:17.924598   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.925400   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.925427   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.925816   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.925833   13823 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0906 18:30:17.926025   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.927332   13823 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 18:30:17.927362   13823 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 18:30:17.927413   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.928541   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.931169   13823 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0906 18:30:17.932027   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.932560   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.932588   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.932970   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0906 18:30:17.933032   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 18:30:17.933049   13823 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 18:30:17.933073   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.933158   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.933325   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.933426   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.933566   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.934213   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.934915   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.934933   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.935404   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.935557   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0906 18:30:17.935722   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.936009   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.936810   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42513
	I0906 18:30:17.937524   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.938126   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.938143   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.938211   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.938388   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.938402   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.938499   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.938891   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.938931   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.938946   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.938969   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.939155   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.939625   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.939703   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.939744   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.939784   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.939923   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.940763   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.941678   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.943308   13823 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0906 18:30:17.943311   13823 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0906 18:30:17.944079   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41849
	I0906 18:30:17.944771   13823 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:17.944801   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 18:30:17.944819   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.944775   13823 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:17.944907   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 18:30:17.944920   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.948201   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.948657   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.948689   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.948842   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.949234   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.949990   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.950029   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.950282   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.950943   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42051
	I0906 18:30:17.950969   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.950989   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.951044   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
	I0906 18:30:17.951238   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.951466   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.951515   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.951465   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.952056   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.952066   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.952073   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.952082   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.952138   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.952155   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.952344   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.952631   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.952687   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.952826   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.952846   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.953106   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.953314   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.953375   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0906 18:30:17.953914   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.953936   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.954109   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.954862   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0906 18:30:17.955016   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.955377   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.955393   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.955452   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.955793   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.955962   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.955973   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0906 18:30:17.956660   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.956816   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.956830   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.957324   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.957345   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.957414   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.957813   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.957859   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.958442   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.958480   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.959016   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.960122   13823 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-959832"
	I0906 18:30:17.960157   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.960504   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.960508   13823 addons.go:234] Setting addon default-storageclass=true in "addons-959832"
	I0906 18:30:17.960533   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.960553   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.960773   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:17.960927   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.960957   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.961028   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0906 18:30:17.963299   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0906 18:30:17.963616   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.964149   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.964171   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.964676   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.964848   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.965817   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:17.966420   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.967088   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0906 18:30:17.967322   13823 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:17.967345   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0906 18:30:17.967363   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.967560   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.968670   13823 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 18:30:17.969763   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.969781   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.970095   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 18:30:17.970112   13823 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 18:30:17.970131   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.970337   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0906 18:30:17.970743   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.971382   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.971385   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.971412   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.972059   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.972078   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.972319   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.972519   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.972712   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.972912   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.973203   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.974390   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.974410   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.975147   13823 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0906 18:30:17.975803   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.976343   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.976370   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.976539   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.976705   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.976816   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.976940   13823 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:17.976955   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 18:30:17.976970   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.977663   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.978180   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.978553   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.980971   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.981520   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.981539   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.981727   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.981897   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.982079   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.982239   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.983455   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
	I0906 18:30:17.983619   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0906 18:30:17.984075   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.984656   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.984672   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.984763   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0906 18:30:17.984898   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.985019   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.985969   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.985992   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.986044   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.986161   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.986175   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.986855   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.986875   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.987256   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.987509   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.988050   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.988397   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 18:30:17.988950   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I0906 18:30:17.989105   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.989288   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.989355   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.989528   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.989938   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.989956   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.990021   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:17.990028   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:17.990027   13823 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 18:30:17.990240   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:17.990252   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:17.990260   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:17.990268   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:17.990348   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.990523   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:17.990554   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:17.990563   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	W0906 18:30:17.990634   13823 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0906 18:30:17.990673   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.990882   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.991485   13823 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:17.991505   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 18:30:17.991523   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.992446   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 18:30:17.992494   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 18:30:17.992990   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
	I0906 18:30:17.993671   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.994204   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 18:30:17.994221   13823 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 18:30:17.994276   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.994304   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.994314   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.994319   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.994675   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.994705   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.995095   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.995127   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.995287   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.995320   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 18:30:17.995468   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.995609   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.995687   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.995715   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.995789   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.996063   13823 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0906 18:30:17.997430   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.997701   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 18:30:17.997900   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.997927   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.998085   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.998251   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.999429   13823 out.go:177]   - Using image docker.io/registry:2.8.3
	I0906 18:30:18.000423   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33437
	I0906 18:30:18.000443   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.000610   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 18:30:18.000700   13823 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 18:30:18.000713   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 18:30:18.000733   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.000992   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.001111   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:18.001653   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:18.001671   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:18.002038   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:18.002683   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:18.002727   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:18.003368   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 18:30:18.003618   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.003952   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.003970   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.004139   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.004273   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.004359   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.004434   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.005728   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 18:30:18.006862   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 18:30:18.007852   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 18:30:18.007870   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 18:30:18.007888   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.010752   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.011133   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.011162   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.011278   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.011435   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.011556   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.011677   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.019869   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
	I0906 18:30:18.025324   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:18.025853   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:18.025867   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	W0906 18:30:18.026199   13823 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37452->192.168.39.98:22: read: connection reset by peer
	I0906 18:30:18.026228   13823 retry.go:31] will retry after 165.921545ms: ssh: handshake failed: read tcp 192.168.39.1:37452->192.168.39.98:22: read: connection reset by peer
	I0906 18:30:18.026287   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:18.026483   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:18.028221   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:18.028440   13823 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:18.028451   13823 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 18:30:18.028463   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.030594   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.030951   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.030970   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.031122   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.031278   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.031416   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.031526   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.046424   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I0906 18:30:18.046881   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:18.047847   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:18.047876   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:18.048219   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:18.048439   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:18.050153   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:18.052332   13823 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 18:30:18.054123   13823 out.go:177]   - Using image docker.io/busybox:stable
	I0906 18:30:18.055683   13823 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:18.055715   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 18:30:18.055735   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.058890   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.059267   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.059308   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.059467   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.059660   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.059835   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.059965   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.325758   13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0906 18:30:18.325780   13823 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0906 18:30:18.462745   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:18.498367   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:18.542161   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 18:30:18.542189   13823 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 18:30:18.544357   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 18:30:18.544383   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 18:30:18.562318   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:18.591769   13823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:18.592321   13823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 18:30:18.615892   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:18.619170   13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 18:30:18.619198   13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 18:30:18.623393   13823 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 18:30:18.623412   13823 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 18:30:18.632558   13823 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 18:30:18.632587   13823 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 18:30:18.642554   13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:18.642577   13823 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0906 18:30:18.646434   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:18.712949   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:18.744354   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 18:30:18.744376   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 18:30:18.745893   13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 18:30:18.745909   13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 18:30:18.758057   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:18.794329   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 18:30:18.794351   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 18:30:18.810523   13823 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:18.810541   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 18:30:18.819725   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:18.820412   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 18:30:18.820430   13823 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 18:30:18.870635   13823 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 18:30:18.870657   13823 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 18:30:18.955167   13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 18:30:18.955193   13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 18:30:19.024347   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 18:30:19.024371   13823 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 18:30:19.036090   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 18:30:19.036117   13823 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 18:30:19.061575   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 18:30:19.061599   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 18:30:19.063347   13823 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 18:30:19.063362   13823 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 18:30:19.071318   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:19.185778   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 18:30:19.185801   13823 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 18:30:19.198921   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:19.198940   13823 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 18:30:19.225401   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:19.225422   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 18:30:19.250965   13823 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 18:30:19.250991   13823 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 18:30:19.295032   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 18:30:19.295064   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 18:30:19.560881   13823 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:19.560903   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 18:30:19.605732   13823 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 18:30:19.605761   13823 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 18:30:19.605857   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:19.639600   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 18:30:19.639626   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 18:30:19.651766   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:19.815029   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:19.831850   13823 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 18:30:19.831883   13823 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 18:30:19.953978   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 18:30:19.953997   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 18:30:20.091151   13823 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:20.091171   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0906 18:30:20.208365   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 18:30:20.208395   13823 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 18:30:20.322907   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:20.592180   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 18:30:20.592203   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 18:30:20.866215   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 18:30:20.866237   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 18:30:21.296320   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:21.296345   13823 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 18:30:21.533570   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:23.237459   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.774672195s)
	I0906 18:30:23.237524   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.237547   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.237911   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.237986   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.238006   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.238024   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.238036   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.238294   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.238313   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.751842   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.253438201s)
	I0906 18:30:23.751900   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.751914   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.751912   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.18956267s)
	I0906 18:30:23.751952   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.751967   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752014   13823 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.160216467s)
	I0906 18:30:23.752042   13823 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.159701916s)
	I0906 18:30:23.752057   13823 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
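The step completed just above splices a hosts {} stanza into CoreDNS's Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 on this run); minikube does this with the sed | kubectl replace pipeline shown in the log line. The sketch below is only a rough client-go equivalent of that edit for readers who want to reproduce it programmatically: the kubeconfig path, the injected addresses, and the 8-space indentation come from the logged command, while the program itself is illustrative and not minikube's implementation.

    // coredns_hosts.go: splice a hosts {} stanza into the coredns ConfigMap,
    // mirroring the sed | kubectl replace pipeline in the log. Illustrative only.
    package main

    import (
        "context"
        "fmt"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as used by the logged command; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
        corefile := cm.Data["Corefile"]
        if !strings.Contains(corefile, "host.minikube.internal") {
            // Insert the hosts block just before the forward plugin, as the sed rule does.
            cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
            if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
        fmt.Println("host record injected into CoreDNS's ConfigMap")
    }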
	I0906 18:30:23.752091   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.136171256s)
	I0906 18:30:23.752131   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752144   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752372   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752387   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.752396   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752402   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752419   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752432   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.752442   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752445   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752450   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752518   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752555   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752587   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.752603   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752619   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752674   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752715   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752737   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752746   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.753079   13823 node_ready.go:35] waiting up to 6m0s for node "addons-959832" to be "Ready" ...
	I0906 18:30:23.753223   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.753238   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.753335   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.753364   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.753380   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.817790   13823 node_ready.go:49] node "addons-959832" has status "Ready":"True"
	I0906 18:30:23.817814   13823 node_ready.go:38] duration metric: took 64.714897ms for node "addons-959832" to be "Ready" ...
	I0906 18:30:23.817823   13823 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
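The node_ready.go and pod_ready.go lines here poll the API server until the node and the system-critical pods report Ready. A minimal sketch of that style of wait, checking a node's Ready condition with client-go, is below; the clientset wiring, poll interval, and function name are assumptions, and only the Ready-condition check mirrors what the log reports.

    // waitnode.go: poll a node's Ready condition, in the spirit of node_ready.go above.
    // Sketch under assumptions: the 2s poll interval is illustrative, not minikube's cadence.
    package waitnode

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, nodeName string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    // The log's status "Ready":"True" corresponds to this condition.
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("node %q not Ready within %s", nodeName, timeout)
            }
            time.Sleep(2 * time.Second)
        }
    }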
	I0906 18:30:23.864694   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.864718   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.864768   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.864803   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.865089   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.865109   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.865155   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.865189   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.865203   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	W0906 18:30:23.865293   13823 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
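The warning above is an optimistic-concurrency conflict: the storage-provisioner-rancher callback tried to mark the local-path StorageClass as default while another writer updated the object first. A common remedy is to retry the annotation update on conflict; a hedged client-go sketch of that pattern follows, using the standard storageclass.kubernetes.io/is-default-class annotation. The function and its retry policy are illustrative, not the addon's actual callback.

    // defaultsc.go: mark a StorageClass as default, retrying on the exact conflict
    // shown in the warning above. Sketch only.
    package defaultsc

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    const defaultAnnotation = "storageclass.kubernetes.io/is-default-class"

    func MarkDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
        // RetryOnConflict re-reads and re-applies the change when the server answers
        // "the object has been modified", which is the failure mode logged above.
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations[defaultAnnotation] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }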
	I0906 18:30:23.895688   13823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:24.386851   13823 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-959832" context rescaled to 1 replicas
	I0906 18:30:24.986957   13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 18:30:24.987010   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:24.990148   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:24.990559   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:24.990592   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:24.990724   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:24.990958   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:24.991131   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:24.991298   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:25.501366   13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 18:30:25.593869   13823 addons.go:234] Setting addon gcp-auth=true in "addons-959832"
	I0906 18:30:25.593929   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:25.594221   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:25.594261   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:25.609081   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0906 18:30:25.609512   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:25.609995   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:25.610010   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:25.610361   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:25.610997   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:25.611034   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:25.625831   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0906 18:30:25.626278   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:25.626760   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:25.626788   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:25.627170   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:25.627386   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:25.629014   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:25.629236   13823 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 18:30:25.629259   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:25.631653   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:25.632049   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:25.632077   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:25.632216   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:25.632399   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:25.632555   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:25.632700   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:25.941079   13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:27.481753   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.835292795s)
	I0906 18:30:27.481764   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.768781047s)
	I0906 18:30:27.481804   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481809   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.723718351s)
	I0906 18:30:27.481827   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481815   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481841   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481846   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481854   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481864   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.662110283s)
	I0906 18:30:27.481888   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481903   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481917   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.410575966s)
	I0906 18:30:27.481932   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481941   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481953   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.876072516s)
	I0906 18:30:27.481973   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481985   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482084   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.830290669s)
	I0906 18:30:27.482101   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482111   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482256   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.667196336s)
	I0906 18:30:27.482281   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	W0906 18:30:27.482296   13823 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 18:30:27.482317   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482323   13823 retry.go:31] will retry after 254.362145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
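"no matches for kind VolumeSnapshotClass" is the usual race when CRDs and the custom resources that depend on them are applied in one kubectl invocation: the CRDs are created but not yet established when the VolumeSnapshotClass is mapped. The log handles it by retrying after 254ms and, a little further down, by re-applying with --force. A small bounded-retry helper in that spirit is sketched below; the attempt count and backoff are assumptions, not minikube's retry.go policy.

    // applyretry.go: bounded retry with backoff around an idempotent apply step, in the
    // spirit of the retry.go line above. The step function, attempt count, and backoff
    // are illustrative assumptions.
    package applyretry

    import (
        "fmt"
        "time"
    )

    func RetryApply(step func() error, attempts int, backoff time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = step(); err == nil {
                return nil
            }
            // Typical cause here: CRDs from the same manifest set are not yet
            // established, so kinds like VolumeSnapshotClass cannot be mapped.
            time.Sleep(backoff)
            backoff *= 2
        }
        return fmt.Errorf("apply still failing after %d attempts: %w", attempts, err)
    }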
	I0906 18:30:27.482304   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482348   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482355   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482362   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482365   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482369   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482372   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482374   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482381   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482386   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482391   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482395   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482402   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482411   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482419   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482426   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482399   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.159419479s)
	I0906 18:30:27.482444   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482451   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482456   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482461   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482466   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482475   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482891   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482928   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482936   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482392   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482433   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.484341   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.484358   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.484374   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.484397   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.484405   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.484413   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.484420   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.484462   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.484469   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.484477   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.484484   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.485863   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485876   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485887   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485896   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485904   13823 addons.go:475] Verifying addon metrics-server=true in "addons-959832"
	I0906 18:30:27.485927   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.485930   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485938   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485943   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485946   13823 addons.go:475] Verifying addon ingress=true in "addons-959832"
	I0906 18:30:27.485950   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485997   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486046   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486077   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.486084   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485864   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486513   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486554   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.486562   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.487477   13823 out.go:177] * Verifying ingress addon...
	I0906 18:30:27.487573   13823 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-959832 service yakd-dashboard -n yakd-dashboard
	
	I0906 18:30:27.486024   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.487691   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.487717   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.487728   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.487937   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.487952   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.487960   13823 addons.go:475] Verifying addon registry=true in "addons-959832"
	I0906 18:30:27.487962   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.489109   13823 out.go:177] * Verifying registry addon...
	I0906 18:30:27.490025   13823 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 18:30:27.490703   13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 18:30:27.494994   13823 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 18:30:27.495014   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:27.495422   13823 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 18:30:27.495442   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
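The repeated kapi.go:96 lines around this point poll the listed label selectors until the matching pods leave Pending. A client-go sketch of that wait loop is below; the namespace, selector, poll interval, and function name are assumptions for illustration, while the Pending-to-Running transition it watches for is what the log is tracking.

    // kapiwait.go: wait for all pods matching a label selector to be Running, roughly
    // what the kapi.go:96 lines above are polling for. Sketch only.
    package kapiwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func WaitPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                allRunning := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        allRunning = false // still Pending, like the states logged above
                        break
                    }
                }
                if allRunning {
                    return nil
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("pods %q in %q not Running within %s", selector, ns, timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }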
	I0906 18:30:27.737115   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:27.995783   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:27.996316   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:28.405776   13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:28.525889   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:28.526140   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.000232   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:29.000400   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.288925   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.755298783s)
	I0906 18:30:29.288949   13823 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.659689548s)
	I0906 18:30:29.288969   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.288980   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.289345   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.289363   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.289373   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.289381   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.289348   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:29.289643   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.289659   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.289670   13823 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-959832"
	I0906 18:30:29.290527   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:29.291464   13823 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 18:30:29.293133   13823 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0906 18:30:29.293804   13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 18:30:29.294483   13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 18:30:29.294501   13823 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 18:30:29.307557   13823 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 18:30:29.307575   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:29.501347   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:29.502636   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.549399   13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 18:30:29.549424   13823 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 18:30:29.631326   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.894156301s)
	I0906 18:30:29.631395   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.631409   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.631783   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.631805   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.631809   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:29.631815   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.631831   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.632053   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.632067   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.711353   13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:29.711373   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 18:30:29.758533   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
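The gcp-auth addon is installed the same way as the others: the rendered manifests are scp'd to the VM (the scp and sshutil lines above) and the node-local kubectl then applies them over SSH with the node's admin kubeconfig. A stripped-down sketch of that remote apply, using golang.org/x/crypto/ssh, is below; the host, user, key path, and manifest list are taken from the log for illustration only, and the scp step that minikube's ssh_runner performs beforehand is omitted.

    // remoteapply.go: run the node-local kubectl apply over SSH, echoing the
    // ssh_runner.go step above. Values are copied from the log purely as an example.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.98:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.31.0/kubectl apply " +
            "-f /etc/kubernetes/addons/gcp-auth-ns.yaml " +
            "-f /etc/kubernetes/addons/gcp-auth-service.yaml " +
            "-f /etc/kubernetes/addons/gcp-auth-webhook.yaml"
        out, err := sess.CombinedOutput(cmd)
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }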
	I0906 18:30:29.798367   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:29.994829   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.995464   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:30.298814   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:30.494755   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:30.495217   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:30.800377   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:30.927844   13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:31.011246   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:31.011996   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:31.259074   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.500495277s)
	I0906 18:30:31.259136   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:31.259150   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:31.259463   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:31.259567   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:31.259547   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:31.259579   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:31.259614   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:31.259913   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:31.259930   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:31.259955   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:31.261909   13823 addons.go:475] Verifying addon gcp-auth=true in "addons-959832"
	I0906 18:30:31.263787   13823 out.go:177] * Verifying gcp-auth addon...
	I0906 18:30:31.265893   13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 18:30:31.298469   13823 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:30:31.298489   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:31.300480   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:31.497017   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:31.497257   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:31.769388   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:31.798048   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:31.995495   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:31.995656   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:32.269836   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:32.298842   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:32.495206   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:32.496478   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:32.769455   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:32.798535   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:32.905084   13823 pod_ready.go:98] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:32 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.98 HostIPs:[{IP:192.168.39.98}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-06 18:30:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:23 +0000 UTC,FinishedAt:2024-09-06 18:30:30 +0000 UTC,ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2 Started:0xc0020651d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000b9f530} {Name:kube-api-access-fjvjc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000b9f540}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0906 18:30:32.905113   13823 pod_ready.go:82] duration metric: took 9.009398679s for pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace to be "Ready" ...
	E0906 18:30:32.905127   13823 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:32 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.98 HostIPs:[{IP:192.168.39.98}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-06 18:30:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:23 +0000 UTC,FinishedAt:2024-09-06 18:30:30 +0000 UTC,ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2 Started:0xc0020651d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000b9f530} {Name:kube-api-access-fjvjc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000b9f540}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0906 18:30:32.905141   13823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.911075   13823 pod_ready.go:93] pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.911105   13823 pod_ready.go:82] duration metric: took 5.954486ms for pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.911119   13823 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.928213   13823 pod_ready.go:93] pod "etcd-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.928234   13823 pod_ready.go:82] duration metric: took 17.107089ms for pod "etcd-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.928244   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.942443   13823 pod_ready.go:93] pod "kube-apiserver-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.942474   13823 pod_ready.go:82] duration metric: took 14.222157ms for pod "kube-apiserver-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.942489   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.948544   13823 pod_ready.go:93] pod "kube-controller-manager-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.948568   13823 pod_ready.go:82] duration metric: took 6.069443ms for pod "kube-controller-manager-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.948594   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-df5wg" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.995554   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:32.996027   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:33.270077   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:33.300133   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:33.300322   13823 pod_ready.go:93] pod "kube-proxy-df5wg" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:33.300343   13823 pod_ready.go:82] duration metric: took 351.740369ms for pod "kube-proxy-df5wg" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.300356   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.494781   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:33.495847   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:33.701424   13823 pod_ready.go:93] pod "kube-scheduler-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:33.701467   13823 pod_ready.go:82] duration metric: took 401.098684ms for pod "kube-scheduler-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.701495   13823 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.769360   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:33.798021   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:33.995683   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:33.997103   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.270015   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:34.299221   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:34.495406   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:34.496126   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.770094   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:34.799237   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:34.996508   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.997585   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:35.270568   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:35.299394   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:35.495141   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:35.495320   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:35.707531   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:35.770986   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:35.800293   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:35.996725   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:35.997639   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:36.270981   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:36.303214   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:36.494976   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:36.496783   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:36.771081   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:36.799874   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:36.995676   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:36.996010   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:37.270120   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:37.299046   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:37.494705   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:37.496067   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:37.707603   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:37.769678   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:37.798583   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:37.995037   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:37.995885   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:38.269217   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:38.298643   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:38.495448   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:38.495856   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:38.769730   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:38.799711   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.083640   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.083787   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:39.496519   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:39.496908   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:39.497701   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.499783   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.769883   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:39.798544   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.994338   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.995398   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:40.209006   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:40.272568   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:40.301397   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:40.498136   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:40.498526   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:40.770814   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:40.798522   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:40.994052   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:40.995394   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.270657   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:41.298770   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:41.498318   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.498596   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:41.770854   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:41.799666   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:41.995027   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.995612   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.270017   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:42.299094   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:42.592984   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.595535   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:42.721960   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:42.772381   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:42.799751   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:42.995172   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.995508   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:43.272873   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:43.298467   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.494939   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:43.495402   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.769785   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:43.798713   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.996443   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.996744   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.269175   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:44.308002   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.494478   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:44.494986   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.770210   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:44.797768   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.995782   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.997472   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.207350   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:45.269487   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:45.298388   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.494409   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.494479   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:45.769970   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:45.798375   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.995583   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:45.995736   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.269632   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:46.299154   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.495331   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.495578   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:46.769857   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:46.799172   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.995967   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.996352   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.207412   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:47.270222   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:47.300058   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.501228   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:47.501496   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.769887   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:47.798711   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.994453   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.994618   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.270499   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:48.298587   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.494874   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.494941   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:48.771487   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:48.799341   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.995078   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:48.995997   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.270055   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:49.297759   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.493704   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.496397   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.707766   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:49.769942   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:49.799020   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.994521   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.995871   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.269405   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:50.298442   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.495620   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:50.496486   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.876382   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:50.877156   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.996700   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.996938   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.269377   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:51.298953   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.495015   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.495481   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.708764   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:51.770620   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:51.798067   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.994702   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.995528   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.269440   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:52.298688   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.496129   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.497284   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:52.769844   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:52.799404   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.995549   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.995828   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.272511   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:53.299182   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.495690   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.498212   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.769884   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:53.799759   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.994840   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.994970   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.208168   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:54.270994   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:54.301366   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.494638   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:54.495314   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.769283   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:54.797866   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.272696   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.272743   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:55.272998   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.298147   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.495547   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.495711   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.770496   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:55.802302   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.995386   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.995623   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:56.268801   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:56.298461   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.494963   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.495882   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.291534   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.291868   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:57.292073   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:57.292099   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.293348   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.309051   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:57.309858   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.312884   13823 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:57.312900   13823 pod_ready.go:82] duration metric: took 23.611395425s for pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:57.312922   13823 pod_ready.go:39] duration metric: took 33.495084445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:57.312943   13823 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:30:57.312998   13823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:57.342569   13823 api_server.go:72] duration metric: took 39.503199537s to wait for apiserver process to appear ...
	I0906 18:30:57.342597   13823 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:30:57.342618   13823 api_server.go:253] Checking apiserver healthz at https://192.168.39.98:8443/healthz ...
	I0906 18:30:57.347032   13823 api_server.go:279] https://192.168.39.98:8443/healthz returned 200:
	ok
	I0906 18:30:57.348263   13823 api_server.go:141] control plane version: v1.31.0
	I0906 18:30:57.348287   13823 api_server.go:131] duration metric: took 5.682402ms to wait for apiserver health ...
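	The healthz probe logged just above (a pgrep for the kube-apiserver process followed by an HTTPS GET against https://192.168.39.98:8443/healthz that returned "200: ok") can be reproduced with a short Go sketch. This is an illustration of the check only, not minikube's api_server.go code; the URL is copied from the log, and the InsecureSkipVerify transport is an assumption made so the example runs without the cluster's CA bundle (the real client authenticates with the generated client certificates).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; adjust for your own cluster.
		url := "https://192.168.39.98:8443/healthz"

		// Assumption: skip TLS verification to keep the sketch self-contained.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok",
		// matching the "returned 200: ok" lines in the log.
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	}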
	I0906 18:30:57.348297   13823 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:30:57.359723   13823 system_pods.go:59] 18 kube-system pods found
	I0906 18:30:57.359757   13823 system_pods.go:61] "coredns-6f6b679f8f-d5d26" [8f56a285-a4a2-42b2-b904-86d4b92e1593] Running
	I0906 18:30:57.359769   13823 system_pods.go:61] "csi-hostpath-attacher-0" [077a752a-2398-4e94-b907-d0888261774c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:30:57.359778   13823 system_pods.go:61] "csi-hostpath-resizer-0" [4d49487b-d00b-4ee7-8007-fc440aad009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:30:57.359790   13823 system_pods.go:61] "csi-hostpathplugin-j7df9" [146029b8-76c4-479b-8217-00a90921e5d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:30:57.359800   13823 system_pods.go:61] "etcd-addons-959832" [2517086a-0030-456f-a07a-8973652d205c] Running
	I0906 18:30:57.359806   13823 system_pods.go:61] "kube-apiserver-addons-959832" [c93b4ce0-62b0-4e1f-9a98-76b6e7ad4fbc] Running
	I0906 18:30:57.359815   13823 system_pods.go:61] "kube-controller-manager-addons-959832" [3dc3e2e0-cdf7-4d83-8d8e-5cc86d87c45b] Running
	I0906 18:30:57.359820   13823 system_pods.go:61] "kube-ingress-dns-minikube" [1673a19c-a4a9-4d9d-bda1-e073fb44b3d8] Running
	I0906 18:30:57.359826   13823 system_pods.go:61] "kube-proxy-df5wg" [f92f8a67-fa25-410a-b7f6-928c602e53e5] Running
	I0906 18:30:57.359829   13823 system_pods.go:61] "kube-scheduler-addons-959832" [0a2458fe-333d-4ca7-b2ab-c58159f3a491] Running
	I0906 18:30:57.359834   13823 system_pods.go:61] "metrics-server-84c5f94fbc-flnx5" [01d423d8-1a69-47b2-be5a-57dc6f3f7268] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:30:57.359840   13823 system_pods.go:61] "nvidia-device-plugin-daemonset-nsxpz" [c35f7718-6879-4edb-9a8b-5b4a82ad2a7c] Running
	I0906 18:30:57.359846   13823 system_pods.go:61] "registry-6fb4cdfc84-4hp57" [995000c4-356d-4aee-b8b4-6c719240ca26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:30:57.359852   13823 system_pods.go:61] "registry-proxy-5jxb2" [8ea39930-6a75-4ad5-a074-233a2b95f98f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:30:57.359858   13823 system_pods.go:61] "snapshot-controller-56fcc65765-db2j5" [afcb8d14-41d7-444b-b16d-496ca520ee39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.359867   13823 system_pods.go:61] "snapshot-controller-56fcc65765-jjdrv" [d3df181f-bfa3-4ef4-9767-ecc84c335cc4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.359871   13823 system_pods.go:61] "storage-provisioner" [a837ebf7-7140-4baa-8b93-ea556996b204] Running
	I0906 18:30:57.359877   13823 system_pods.go:61] "tiller-deploy-b48cc5f79-d2ggh" [5951b042-9892-4eb8-b567-933475c4a163] Running
	I0906 18:30:57.359885   13823 system_pods.go:74] duration metric: took 11.581782ms to wait for pod list to return data ...
	I0906 18:30:57.359894   13823 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:30:57.364154   13823 default_sa.go:45] found service account: "default"
	I0906 18:30:57.364173   13823 default_sa.go:55] duration metric: took 4.273217ms for default service account to be created ...
	I0906 18:30:57.364181   13823 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:30:57.373118   13823 system_pods.go:86] 18 kube-system pods found
	I0906 18:30:57.373150   13823 system_pods.go:89] "coredns-6f6b679f8f-d5d26" [8f56a285-a4a2-42b2-b904-86d4b92e1593] Running
	I0906 18:30:57.373165   13823 system_pods.go:89] "csi-hostpath-attacher-0" [077a752a-2398-4e94-b907-d0888261774c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:30:57.373175   13823 system_pods.go:89] "csi-hostpath-resizer-0" [4d49487b-d00b-4ee7-8007-fc440aad009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:30:57.373194   13823 system_pods.go:89] "csi-hostpathplugin-j7df9" [146029b8-76c4-479b-8217-00a90921e5d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:30:57.373202   13823 system_pods.go:89] "etcd-addons-959832" [2517086a-0030-456f-a07a-8973652d205c] Running
	I0906 18:30:57.373217   13823 system_pods.go:89] "kube-apiserver-addons-959832" [c93b4ce0-62b0-4e1f-9a98-76b6e7ad4fbc] Running
	I0906 18:30:57.373223   13823 system_pods.go:89] "kube-controller-manager-addons-959832" [3dc3e2e0-cdf7-4d83-8d8e-5cc86d87c45b] Running
	I0906 18:30:57.373227   13823 system_pods.go:89] "kube-ingress-dns-minikube" [1673a19c-a4a9-4d9d-bda1-e073fb44b3d8] Running
	I0906 18:30:57.373230   13823 system_pods.go:89] "kube-proxy-df5wg" [f92f8a67-fa25-410a-b7f6-928c602e53e5] Running
	I0906 18:30:57.373237   13823 system_pods.go:89] "kube-scheduler-addons-959832" [0a2458fe-333d-4ca7-b2ab-c58159f3a491] Running
	I0906 18:30:57.373242   13823 system_pods.go:89] "metrics-server-84c5f94fbc-flnx5" [01d423d8-1a69-47b2-be5a-57dc6f3f7268] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:30:57.373246   13823 system_pods.go:89] "nvidia-device-plugin-daemonset-nsxpz" [c35f7718-6879-4edb-9a8b-5b4a82ad2a7c] Running
	I0906 18:30:57.373252   13823 system_pods.go:89] "registry-6fb4cdfc84-4hp57" [995000c4-356d-4aee-b8b4-6c719240ca26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:30:57.373257   13823 system_pods.go:89] "registry-proxy-5jxb2" [8ea39930-6a75-4ad5-a074-233a2b95f98f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:30:57.373264   13823 system_pods.go:89] "snapshot-controller-56fcc65765-db2j5" [afcb8d14-41d7-444b-b16d-496ca520ee39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.373273   13823 system_pods.go:89] "snapshot-controller-56fcc65765-jjdrv" [d3df181f-bfa3-4ef4-9767-ecc84c335cc4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.373280   13823 system_pods.go:89] "storage-provisioner" [a837ebf7-7140-4baa-8b93-ea556996b204] Running
	I0906 18:30:57.373287   13823 system_pods.go:89] "tiller-deploy-b48cc5f79-d2ggh" [5951b042-9892-4eb8-b567-933475c4a163] Running
	I0906 18:30:57.373299   13823 system_pods.go:126] duration metric: took 9.109597ms to wait for k8s-apps to be running ...
	I0906 18:30:57.373309   13823 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:30:57.373355   13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:30:57.425478   13823 system_svc.go:56] duration metric: took 52.162346ms WaitForService to wait for kubelet
	I0906 18:30:57.425503   13823 kubeadm.go:582] duration metric: took 39.586136805s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
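	The kubelet check recorded above runs "sudo systemctl is-active --quiet service kubelet" through the test's ssh_runner inside the VM. A minimal local sketch of the same idea follows; it assumes systemd is present and sufficient privileges, drops the stray "service" token from the logged invocation, and is not the ssh_runner code itself.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "is-active --quiet" exits 0 when the unit is active and non-zero
		// otherwise, which is the only signal this kind of check relies on.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}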
	I0906 18:30:57.425533   13823 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:30:57.428818   13823 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:30:57.428842   13823 node_conditions.go:123] node cpu capacity is 2
	I0906 18:30:57.428863   13823 node_conditions.go:105] duration metric: took 3.314164ms to run NodePressure ...
	I0906 18:30:57.428878   13823 start.go:241] waiting for startup goroutines ...
	I0906 18:30:57.495273   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.495869   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.769593   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:57.798564   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.995122   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.995468   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.270153   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:58.299032   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.495028   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.495638   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.770199   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:58.797952   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.994635   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.995409   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.269612   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:59.298532   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.494666   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.495202   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:59.769637   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:59.799716   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.995110   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.997059   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.269925   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:00.299168   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.495168   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.495452   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.769831   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:00.798879   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.994356   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.995338   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:01.270323   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:01.298809   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:01.497749   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:01.509994   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.196171   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:02.197232   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.197446   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.198219   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.269772   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:02.299913   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.495441   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.496083   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.770038   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:02.800728   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.995143   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.995393   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.269175   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:03.298453   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.495672   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.495941   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.769214   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:03.798100   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.996193   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.996547   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.270229   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:04.300339   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:04.495048   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:04.495208   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.769698   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:04.798488   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.000395   13823 kapi.go:107] duration metric: took 37.509684094s to wait for kubernetes.io/minikube-addons=registry ...
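	The kapi.go waits that dominate this log poll the kube-system namespace for pods carrying an addon label (for example kubernetes.io/minikube-addons=registry) until they leave Pending, as the "took 37.5s to wait for kubernetes.io/minikube-addons=registry" line above shows. A rough client-go sketch of that polling pattern, assuming a standard kubeconfig at ~/.kube/config; it illustrates the loop, it is not minikube's kapi.go implementation.

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		selector := "kubernetes.io/minikube-addons=registry" // label from the log above
		deadline := time.Now().Add(10 * time.Minute)

		for time.Now().Before(deadline) {
			pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				fmt.Println("all pods matching", selector, "are Running")
				return
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
		}
		fmt.Println("timed out waiting for", selector)
	}

	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}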
	I0906 18:31:05.000674   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:05.270104   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:05.297638   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.495343   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:05.770543   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:05.800954   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.994937   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.270489   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:06.299401   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:06.495523   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.775824   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:06.804605   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.000907   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.281094   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:07.306915   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.818623   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:07.820944   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.821122   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.994968   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:08.269992   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:08.298837   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.493945   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:08.769482   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:08.798377   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.994691   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:09.269835   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:09.299230   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:09.502957   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:09.769997   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:09.798765   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.127650   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:10.275919   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:10.300104   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.495617   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:10.769823   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:10.798656   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.995288   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:11.270073   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:11.299546   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.494131   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:11.771059   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:11.799920   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.995856   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:12.274737   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:12.299392   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.494262   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:12.769625   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:12.798619   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.995358   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:13.316812   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:13.317852   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.495815   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:13.769181   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:13.799259   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.995199   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:14.276613   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:14.379012   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.494898   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:14.770331   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:14.798773   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.995445   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:15.272540   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:15.301141   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.495285   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:15.770353   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:15.798730   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.994520   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:16.270657   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:16.300620   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.494263   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:16.770371   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:16.799256   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.994749   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:17.269747   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:17.298951   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.494719   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:17.769832   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:17.799470   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.994977   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:18.269720   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:18.310969   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.494867   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:18.769348   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:18.798225   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.994850   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:19.282829   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:19.384038   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.497045   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:19.770599   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:19.801611   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.996550   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:20.270037   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:20.311775   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.498768   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:20.769965   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:20.799204   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.997161   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:21.270035   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:21.299010   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.494660   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:21.769290   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:21.798619   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.994674   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:22.269883   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:22.300295   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:22.496723   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:22.771097   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:22.799152   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.013066   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:23.270485   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:23.299028   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.496372   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:23.770017   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:23.801362   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.996357   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:24.270445   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:24.299776   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:24.494072   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.030314   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:25.030783   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.031442   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.269910   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:25.371610   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.494715   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.770973   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:25.799735   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.994854   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:26.270976   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:26.299500   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:26.494510   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:26.770729   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:26.873976   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:26.993699   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:27.269916   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:27.299203   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:27.494353   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:27.771154   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:27.798428   13823 kapi.go:107] duration metric: took 58.504619679s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 18:31:27.996381   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:28.271088   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:28.493970   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:28.769758   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:28.994788   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:29.271720   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:29.496574   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:29.770127   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:29.994752   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:30.464639   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:30.495124   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:30.770101   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:30.995408   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:31.270144   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:31.495730   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:31.769464   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:31.996345   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:32.269861   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:32.495930   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:32.768939   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:32.996483   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:33.269235   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:33.494459   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:33.769303   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:33.994740   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:34.270162   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:34.494209   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:34.772239   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:34.995450   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:35.270037   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:35.494858   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:35.770518   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:35.994084   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:36.270405   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:36.496230   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:36.770326   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:36.994330   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:37.270147   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:37.493620   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:37.778857   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:38.113592   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:38.270475   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:38.494284   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:38.769614   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:39.006516   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:39.273731   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:39.495548   13823 kapi.go:107] duration metric: took 1m12.005524271s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 18:31:39.770852   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:40.269133   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:40.769688   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:41.270179   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:41.769459   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:42.270714   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:42.770252   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:43.270294   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:43.770209   13823 kapi.go:107] duration metric: took 1m12.504314576s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 18:31:43.771902   13823 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-959832 cluster.
	I0906 18:31:43.773493   13823 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 18:31:43.774994   13823 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 18:31:43.776439   13823 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, default-storageclass, nvidia-device-plugin, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0906 18:31:43.778228   13823 addons.go:510] duration metric: took 1m25.938813235s for enable addons: enabled=[storage-provisioner ingress-dns default-storageclass nvidia-device-plugin cloud-spanner metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0906 18:31:43.778280   13823 start.go:246] waiting for cluster config update ...
	I0906 18:31:43.778303   13823 start.go:255] writing updated cluster config ...
	I0906 18:31:43.778560   13823 ssh_runner.go:195] Run: rm -f paused
	I0906 18:31:43.828681   13823 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 18:31:43.830792   13823 out.go:177] * Done! kubectl is now configured to use "addons-959832" cluster and "default" namespace by default
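
For reference, the gcp-auth opt-out mentioned in the addon output above is driven by a pod label. A minimal sketch of opting a pod out, assuming the `gcp-auth-skip-secret` label key printed in the log, a label value of "true", and a hypothetical pod name, could look like:

	# Hypothetical demo pod; the label key comes from the minikube message above,
	# the value "true" is an assumption about what the gcp-auth webhook checks.
	kubectl --context addons-959832 run no-gcp-auth-demo \
	  --image=gcr.io/k8s-minikube/busybox \
	  --labels="gcp-auth-skip-secret=true" \
	  -- sleep 3600

As the log also notes, pods created before the addon was enabled would need to be recreated (or the addon re-enabled with --refresh) for credentials to be mounted.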
	
	
	==> CRI-O <==
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.601041055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648160601014340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bceed5d-4dad-4eac-9cad-ef989eb34849 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.601629520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19a2e2ad-56b1-4343-b092-f6ffcec554bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.601684140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19a2e2ad-56b1-4343-b092-f6ffcec554bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.602249553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a77b0e39569e432e0f0abecfc0d0dc295080be9aacb137b80a5872ba71c5293,PodSandboxId:fe6b4e93538c724751157666e66ca4abedbda45f250c23a4f6257b68cdebda47,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725648151296825106,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-d7bkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4254132-d806-4728-8fb3-6eb98f48b868,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb
1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018a
b909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-42b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpe
c{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSp
ec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698
dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19a2e2ad-56b1-4343-b092-f6ffcec554bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.642020063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d64dabe-2eb3-49b7-b64f-9c60b8006a86 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.642088607Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d64dabe-2eb3-49b7-b64f-9c60b8006a86 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.643085996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99e35dfd-9775-45e0-a714-79db5374ab96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.646971826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648160646941272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99e35dfd-9775-45e0-a714-79db5374ab96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.651248197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c3b8412-0a18-4154-a76a-454317e34391 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.651481538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c3b8412-0a18-4154-a76a-454317e34391 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.652105130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a77b0e39569e432e0f0abecfc0d0dc295080be9aacb137b80a5872ba71c5293,PodSandboxId:fe6b4e93538c724751157666e66ca4abedbda45f250c23a4f6257b68cdebda47,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725648151296825106,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-d7bkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4254132-d806-4728-8fb3-6eb98f48b868,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb
1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018a
b909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-42b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpe
c{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSp
ec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698
dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c3b8412-0a18-4154-a76a-454317e34391 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.687957212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6af0095a-41bb-4972-9ee6-2e4a0fbbf5a9 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.688048821Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6af0095a-41bb-4972-9ee6-2e4a0fbbf5a9 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.689699552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0bd8df9-3f3f-4ac6-8b10-f461657dc8a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.691704776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648160691676512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0bd8df9-3f3f-4ac6-8b10-f461657dc8a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.692259062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=acf6a064-b600-4087-b947-a0447301c3b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.692331600Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=acf6a064-b600-4087-b947-a0447301c3b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.692690683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a77b0e39569e432e0f0abecfc0d0dc295080be9aacb137b80a5872ba71c5293,PodSandboxId:fe6b4e93538c724751157666e66ca4abedbda45f250c23a4f6257b68cdebda47,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725648151296825106,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-d7bkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4254132-d806-4728-8fb3-6eb98f48b868,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb
1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018a
b909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-42b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpe
c{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSp
ec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698
dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=acf6a064-b600-4087-b947-a0447301c3b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.738869535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=245f192d-b18c-4545-8664-b83e2387a998 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.738945195Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=245f192d-b18c-4545-8664-b83e2387a998 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.740104647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7dc1d05-e602-4cf7-b5ea-8c8bdaa54ac1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.741314974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648160741289953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7dc1d05-e602-4cf7-b5ea-8c8bdaa54ac1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.742042163Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8f1dbd8-5b63-4d65-9a62-f1450f934617 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.742100051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8f1dbd8-5b63-4d65-9a62-f1450f934617 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:42:40 addons-959832 crio[670]: time="2024-09-06 18:42:40.742614464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a77b0e39569e432e0f0abecfc0d0dc295080be9aacb137b80a5872ba71c5293,PodSandboxId:fe6b4e93538c724751157666e66ca4abedbda45f250c23a4f6257b68cdebda47,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725648151296825106,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-d7bkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4254132-d806-4728-8fb3-6eb98f48b868,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9dae7d0e5426c522d916326ed5310de8b20aa8b1ecadc4c59930e1fb4b90f40,PodSandboxId:09518ced68465a0aa521b483bb04e0b5ce62a2154edea2d4a4f4d656fb1c544e,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647489366892380,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h6cwj,i
o.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4c6b718a-631e-48a3-af85-922d1967a093,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1aec73f0b154e69b051134a94658aa7595309268f98617f95f08509ed80f285,PodSandboxId:d305340c168514573731896a71374ae3c61b68b91fc7a9a254ebb89b09263fda,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1725647475257644805,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name:
ingress-nginx-admission-create-gbh5k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e704f376-d431-411d-a81b-4625e16fb5bb,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.conta
iner.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb
1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6
e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018a
b909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-42b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpe
c{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSp
ec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698
dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8f1dbd8-5b63-4d65-9a62-f1450f934617 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2a77b0e39569e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app           0                   fe6b4e93538c7       hello-world-app-55bf9c44b4-d7bkf
	47ff4cd5a2010       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   e9d551110687a       nginx
	bff22acf8afe6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 10 minutes ago      Running             gcp-auth                  0                   6009e3b23d6b9       gcp-auth-89d5ffd79-wbp4z
	b9dae7d0e5426       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             11 minutes ago      Exited              patch                     2                   09518ced68465       ingress-nginx-admission-patch-h6cwj
	f1aec73f0b154       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   d305340c16851       ingress-nginx-admission-create-gbh5k
	dbdca73cd5f41       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        11 minutes ago      Running             metrics-server            0                   ebd17a7bfd07d       metrics-server-84c5f94fbc-flnx5
	d8e6b5740dfd9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             11 minutes ago      Running             local-path-provisioner    0                   bb57b9b0a87b0       local-path-provisioner-86d989889c-wmllc
	095caffa96df4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   fb03fe115a315       storage-provisioner
	daf771eda93ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             12 minutes ago      Running             coredns                   0                   cf16f9b0ce0a6       coredns-6f6b679f8f-d5d26
	f62f176bebb98       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             12 minutes ago      Running             kube-proxy                0                   a16d4e27651e7       kube-proxy-df5wg
	0976f654c6450       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             12 minutes ago      Running             kube-controller-manager   0                   08d02ee1f1b83       kube-controller-manager-addons-959832
	0062bd6dff511       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             12 minutes ago      Running             kube-scheduler            0                   3810e200d7f2c       kube-scheduler-addons-959832
	14011f30e4b49       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago      Running             etcd                      0                   1340e66e90fd2       etcd-addons-959832
	f03b3137e10ab       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             12 minutes ago      Running             kube-apiserver            0                   6a4a01ed6ac27       kube-apiserver-addons-959832
	
	
	==> coredns [daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025] <==
	[INFO] 10.244.0.8:53109 - 30493 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000031299s
	[INFO] 10.244.0.8:51164 - 21323 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000073777s
	[INFO] 10.244.0.8:51164 - 9807 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00003634s
	[INFO] 10.244.0.8:33912 - 61080 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030797s
	[INFO] 10.244.0.8:33912 - 53146 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000256s
	[INFO] 10.244.0.8:51671 - 8759 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027086s
	[INFO] 10.244.0.8:51671 - 2357 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069078s
	[INFO] 10.244.0.8:58937 - 47939 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000029815s
	[INFO] 10.244.0.8:58937 - 55677 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000025038s
	[INFO] 10.244.0.8:59574 - 33097 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000055434s
	[INFO] 10.244.0.8:59574 - 49222 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000032883s
	[INFO] 10.244.0.8:34345 - 33033 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000025905s
	[INFO] 10.244.0.8:34345 - 61711 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000025782s
	[INFO] 10.244.0.8:40854 - 19935 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000024436s
	[INFO] 10.244.0.8:40854 - 16861 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000022079s
	[INFO] 10.244.0.8:54975 - 41823 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000033452s
	[INFO] 10.244.0.8:54975 - 6745 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041358s
	[INFO] 10.244.0.22:39608 - 5840 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000623407s
	[INFO] 10.244.0.22:47451 - 10373 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000773196s
	[INFO] 10.244.0.22:47147 - 43920 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096203s
	[INFO] 10.244.0.22:37201 - 19027 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000052062s
	[INFO] 10.244.0.22:51583 - 38377 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070102s
	[INFO] 10.244.0.22:37854 - 16491 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000049501s
	[INFO] 10.244.0.22:55914 - 7247 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000846443s
	[INFO] 10.244.0.22:51764 - 46657 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001169257s
	
	
	==> describe nodes <==
	Name:               addons-959832
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-959832
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=addons-959832
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_30_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-959832
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:30:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-959832
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:42:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:41:17 +0000   Fri, 06 Sep 2024 18:30:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:41:17 +0000   Fri, 06 Sep 2024 18:30:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:41:17 +0000   Fri, 06 Sep 2024 18:30:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:41:17 +0000   Fri, 06 Sep 2024 18:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    addons-959832
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 789fcfcd81af4b61a593ac3d592db28c
	  System UUID:                789fcfcd-81af-4b61-a593-ac3d592db28c
	  Boot ID:                    ca224247-03d2-489f-a0b8-0a2fbb84d9da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-world-app-55bf9c44b4-d7bkf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-89d5ffd79-wbp4z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-d5d26                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-959832                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-959832               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-959832      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-df5wg                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-959832               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-flnx5            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-wmllc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-959832 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-959832 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-959832 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-959832 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-959832 event: Registered Node addons-959832 in Controller
	
	
	==> dmesg <==
	[Sep 6 18:31] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.023954] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.411470] kauditd_printk_skb: 60 callbacks suppressed
	[  +6.032630] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.000760] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.371405] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.464629] kauditd_printk_skb: 42 callbacks suppressed
	[  +9.171733] kauditd_printk_skb: 9 callbacks suppressed
	[Sep 6 18:32] kauditd_printk_skb: 30 callbacks suppressed
	[Sep 6 18:34] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:39] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:40] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.061671] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.069446] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.609090] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.878882] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.370924] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.422494] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.580656] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.557034] kauditd_printk_skb: 4 callbacks suppressed
	[Sep 6 18:41] kauditd_printk_skb: 42 callbacks suppressed
	[  +6.420844] kauditd_printk_skb: 9 callbacks suppressed
	[Sep 6 18:42] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.637554] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d] <==
	{"level":"info","ts":"2024-09-06T18:31:30.449052Z","caller":"traceutil/trace.go:171","msg":"trace[147865116] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"195.658402ms","start":"2024-09-06T18:31:30.253384Z","end":"2024-09-06T18:31:30.449042Z","steps":["trace[147865116] 'process raft request'  (duration: 195.381086ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:31:30.449255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.027216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:30.449308Z","caller":"traceutil/trace.go:171","msg":"trace[1936020184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"194.091492ms","start":"2024-09-06T18:31:30.255208Z","end":"2024-09-06T18:31:30.449299Z","steps":["trace[1936020184] 'agreement among raft nodes before linearized reading'  (duration: 194.016579ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:38.095195Z","caller":"traceutil/trace.go:171","msg":"trace[688394279] linearizableReadLoop","detail":"{readStateIndex:1162; appliedIndex:1161; }","duration":"115.853572ms","start":"2024-09-06T18:31:37.979325Z","end":"2024-09-06T18:31:38.095179Z","steps":["trace[688394279] 'read index received'  (duration: 115.687137ms)","trace[688394279] 'applied index is now lower than readState.Index'  (duration: 165.625µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-06T18:31:38.095479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.064057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:38.095541Z","caller":"traceutil/trace.go:171","msg":"trace[1813618553] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1130; }","duration":"116.211558ms","start":"2024-09-06T18:31:37.979321Z","end":"2024-09-06T18:31:38.095532Z","steps":["trace[1813618553] 'agreement among raft nodes before linearized reading'  (duration: 116.005384ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:38.095837Z","caller":"traceutil/trace.go:171","msg":"trace[2080125568] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"147.639748ms","start":"2024-09-06T18:31:37.948183Z","end":"2024-09-06T18:31:38.095822Z","steps":["trace[2080125568] 'process raft request'  (duration: 146.880754ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:42.416683Z","caller":"traceutil/trace.go:171","msg":"trace[91810177] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"156.247568ms","start":"2024-09-06T18:31:42.260415Z","end":"2024-09-06T18:31:42.416663Z","steps":["trace[91810177] 'process raft request'  (duration: 155.748211ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:40:07.229181Z","caller":"traceutil/trace.go:171","msg":"trace[484312089] linearizableReadLoop","detail":"{readStateIndex:2159; appliedIndex:2158; }","duration":"409.788256ms","start":"2024-09-06T18:40:06.819346Z","end":"2024-09-06T18:40:07.229135Z","steps":["trace[484312089] 'read index received'  (duration: 409.628912ms)","trace[484312089] 'applied index is now lower than readState.Index'  (duration: 158.846µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:40:07.229379Z","caller":"traceutil/trace.go:171","msg":"trace[1656832041] transaction","detail":"{read_only:false; response_revision:2017; number_of_response:1; }","duration":"491.002048ms","start":"2024-09-06T18:40:06.738356Z","end":"2024-09-06T18:40:07.229358Z","steps":["trace[1656832041] 'process raft request'  (duration: 490.652338ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:40:07.229604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.584673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:40:07.229643Z","caller":"traceutil/trace.go:171","msg":"trace[1915074209] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2017; }","duration":"248.626111ms","start":"2024-09-06T18:40:06.981009Z","end":"2024-09-06T18:40:07.229635Z","steps":["trace[1915074209] 'agreement among raft nodes before linearized reading'  (duration: 248.574709ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:40:07.229740Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:40:06.738339Z","time spent":"491.264052ms","remote":"127.0.0.1:39516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-959832\" mod_revision:1958 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-959832\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-959832\" > >"}
	{"level":"warn","ts":"2024-09-06T18:40:07.229558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"410.139686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-06T18:40:07.229900Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.345839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-06T18:40:07.229941Z","caller":"traceutil/trace.go:171","msg":"trace[1213588532] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2017; }","duration":"183.385298ms","start":"2024-09-06T18:40:07.046548Z","end":"2024-09-06T18:40:07.229933Z","steps":["trace[1213588532] 'agreement among raft nodes before linearized reading'  (duration: 183.300185ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:40:07.229918Z","caller":"traceutil/trace.go:171","msg":"trace[1459748069] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2017; }","duration":"410.570505ms","start":"2024-09-06T18:40:06.819339Z","end":"2024-09-06T18:40:07.229910Z","steps":["trace[1459748069] 'agreement among raft nodes before linearized reading'  (duration: 410.06832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:40:07.230002Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:40:06.819307Z","time spent":"410.688119ms","remote":"127.0.0.1:39260","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-09-06T18:40:09.281386Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1536}
	{"level":"info","ts":"2024-09-06T18:40:09.333184Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1536,"took":"51.266331ms","hash":4192817885,"current-db-size-bytes":6647808,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3444736,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-06T18:40:09.333251Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4192817885,"revision":1536,"compact-revision":-1}
	{"level":"info","ts":"2024-09-06T18:41:05.745354Z","caller":"traceutil/trace.go:171","msg":"trace[486873728] transaction","detail":"{read_only:false; response_revision:2438; number_of_response:1; }","duration":"152.706273ms","start":"2024-09-06T18:41:05.592614Z","end":"2024-09-06T18:41:05.745320Z","steps":["trace[486873728] 'process raft request'  (duration: 152.60606ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:41:37.968550Z","caller":"traceutil/trace.go:171","msg":"trace[849290624] linearizableReadLoop","detail":"{readStateIndex:2693; appliedIndex:2692; }","duration":"150.307732ms","start":"2024-09-06T18:41:37.818226Z","end":"2024-09-06T18:41:37.968534Z","steps":["trace[849290624] 'read index received'  (duration: 148.672472ms)","trace[849290624] 'applied index is now lower than readState.Index'  (duration: 1.634577ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-06T18:41:37.968874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.568984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:41:37.968936Z","caller":"traceutil/trace.go:171","msg":"trace[1335768279] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2517; }","duration":"150.706196ms","start":"2024-09-06T18:41:37.818222Z","end":"2024-09-06T18:41:37.968928Z","steps":["trace[1335768279] 'agreement among raft nodes before linearized reading'  (duration: 150.544871ms)"],"step_count":1}
	
	
	==> gcp-auth [bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961] <==
	2024/09/06 18:31:44 Ready to write response ...
	2024/09/06 18:39:57 Ready to marshal response ...
	2024/09/06 18:39:57 Ready to write response ...
	2024/09/06 18:40:01 Ready to marshal response ...
	2024/09/06 18:40:01 Ready to write response ...
	2024/09/06 18:40:03 Ready to marshal response ...
	2024/09/06 18:40:03 Ready to write response ...
	2024/09/06 18:40:12 Ready to marshal response ...
	2024/09/06 18:40:12 Ready to write response ...
	2024/09/06 18:40:20 Ready to marshal response ...
	2024/09/06 18:40:20 Ready to write response ...
	2024/09/06 18:40:36 Ready to marshal response ...
	2024/09/06 18:40:36 Ready to write response ...
	2024/09/06 18:40:36 Ready to marshal response ...
	2024/09/06 18:40:36 Ready to write response ...
	2024/09/06 18:40:43 Ready to marshal response ...
	2024/09/06 18:40:43 Ready to write response ...
	2024/09/06 18:41:01 Ready to marshal response ...
	2024/09/06 18:41:01 Ready to write response ...
	2024/09/06 18:41:01 Ready to marshal response ...
	2024/09/06 18:41:01 Ready to write response ...
	2024/09/06 18:41:01 Ready to marshal response ...
	2024/09/06 18:41:01 Ready to write response ...
	2024/09/06 18:42:29 Ready to marshal response ...
	2024/09/06 18:42:29 Ready to write response ...
	
	
	==> kernel <==
	 18:42:41 up 13 min,  0 users,  load average: 1.13, 1.13, 0.73
	Linux addons-959832 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9] <==
	E0906 18:32:14.711932       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0906 18:32:14.714123       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.186.155:443: connect: connection refused" logger="UnhandledError"
	E0906 18:32:14.719474       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.186.155:443: connect: connection refused" logger="UnhandledError"
	I0906 18:32:14.784984       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0906 18:39:53.218243       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0906 18:39:54.261305       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0906 18:40:11.987036       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0906 18:40:12.163983       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.110.216"}
	I0906 18:40:13.051545       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0906 18:40:35.983222       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:35.983535       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:40:36.005118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:36.005246       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:40:36.035687       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:36.035737       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:40:36.054186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:36.054461       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0906 18:40:37.036569       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0906 18:40:37.057021       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0906 18:40:37.073802       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0906 18:41:01.741248       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.147.21"}
	I0906 18:42:30.199507       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.38.159"}
	
	
	==> kube-controller-manager [0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49] <==
	W0906 18:41:20.022059       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:41:20.022222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:41:22.445206       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0906 18:41:51.344754       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:41:51.344843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:03.463581       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:03.463640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:03.568948       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:03.569074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:42:06.395903       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:06.395941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:42:30.022795       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="55.615643ms"
	I0906 18:42:30.052831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="23.243677ms"
	I0906 18:42:30.052910       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.521µs"
	I0906 18:42:30.052949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.33µs"
	I0906 18:42:30.058546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="142.01µs"
	W0906 18:42:31.950568       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:31.950680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:42:32.113230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="21.325369ms"
	I0906 18:42:32.113315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.597µs"
	I0906 18:42:32.775898       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0906 18:42:32.780994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="3.812µs"
	I0906 18:42:32.787609       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0906 18:42:34.803575       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:34.803731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:30:20.895600       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:30:20.905684       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.98"]
	E0906 18:30:20.905767       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:30:20.981385       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:30:20.981522       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:30:20.981552       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:30:20.986309       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:30:20.986680       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:30:20.986707       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:30:20.988245       1 config.go:197] "Starting service config controller"
	I0906 18:30:20.988269       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:30:20.988299       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:30:20.988303       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:30:20.988869       1 config.go:326] "Starting node config controller"
	I0906 18:30:20.988881       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:30:21.089002       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:30:21.089043       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:30:21.089077       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832] <==
	W0906 18:30:10.632826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:10.632881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:10.632992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 18:30:10.633043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:10.633145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 18:30:10.633198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:10.633303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:30:10.633365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.559856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 18:30:11.559915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.591626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:11.591724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.593014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 18:30:11.593712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.624825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 18:30:11.625533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.640090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:11.640140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.646831       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 18:30:11.646890       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0906 18:30:11.875922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 18:30:11.876131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.954173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 18:30:11.954234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0906 18:30:14.512534       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 18:42:31 addons-959832 kubelet[1215]: I0906 18:42:31.413655    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lc722\" (UniqueName: \"kubernetes.io/projected/1673a19c-a4a9-4d9d-bda1-e073fb44b3d8-kube-api-access-lc722\") on node \"addons-959832\" DevicePath \"\""
	Sep 06 18:42:32 addons-959832 kubelet[1215]: I0906 18:42:32.072905    1215 scope.go:117] "RemoveContainer" containerID="3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218"
	Sep 06 18:42:32 addons-959832 kubelet[1215]: I0906 18:42:32.088113    1215 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-d7bkf" podStartSLOduration=1.412805139 podStartE2EDuration="2.088087749s" podCreationTimestamp="2024-09-06 18:42:30 +0000 UTC" firstStartedPulling="2024-09-06 18:42:30.606344076 +0000 UTC m=+737.398611160" lastFinishedPulling="2024-09-06 18:42:31.281626685 +0000 UTC m=+738.073893770" observedRunningTime="2024-09-06 18:42:32.087193564 +0000 UTC m=+738.879460668" watchObservedRunningTime="2024-09-06 18:42:32.088087749 +0000 UTC m=+738.880354853"
	Sep 06 18:42:32 addons-959832 kubelet[1215]: I0906 18:42:32.102770    1215 scope.go:117] "RemoveContainer" containerID="3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218"
	Sep 06 18:42:32 addons-959832 kubelet[1215]: E0906 18:42:32.103693    1215 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218\": container with ID starting with 3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218 not found: ID does not exist" containerID="3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218"
	Sep 06 18:42:32 addons-959832 kubelet[1215]: I0906 18:42:32.103748    1215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218"} err="failed to get container status \"3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218\": rpc error: code = NotFound desc = could not find container \"3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218\": container with ID starting with 3be35f5c5847b38462930ea0c9c2c00be43b3e9ad8fc484fd64c7af4f1fcd218 not found: ID does not exist"
	Sep 06 18:42:33 addons-959832 kubelet[1215]: I0906 18:42:33.340363    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1673a19c-a4a9-4d9d-bda1-e073fb44b3d8" path="/var/lib/kubelet/pods/1673a19c-a4a9-4d9d-bda1-e073fb44b3d8/volumes"
	Sep 06 18:42:33 addons-959832 kubelet[1215]: I0906 18:42:33.341194    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6b718a-631e-48a3-af85-922d1967a093" path="/var/lib/kubelet/pods/4c6b718a-631e-48a3-af85-922d1967a093/volumes"
	Sep 06 18:42:33 addons-959832 kubelet[1215]: I0906 18:42:33.341759    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e704f376-d431-411d-a81b-4625e16fb5bb" path="/var/lib/kubelet/pods/e704f376-d431-411d-a81b-4625e16fb5bb/volumes"
	Sep 06 18:42:33 addons-959832 kubelet[1215]: E0906 18:42:33.818790    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648153816609649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:42:33 addons-959832 kubelet[1215]: E0906 18:42:33.818945    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648153816609649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:42:36 addons-959832 kubelet[1215]: I0906 18:42:36.050521    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlnhd\" (UniqueName: \"kubernetes.io/projected/834d08fb-b9a8-4a67-b022-fec07c4b5fa9-kube-api-access-dlnhd\") pod \"834d08fb-b9a8-4a67-b022-fec07c4b5fa9\" (UID: \"834d08fb-b9a8-4a67-b022-fec07c4b5fa9\") "
	Sep 06 18:42:36 addons-959832 kubelet[1215]: I0906 18:42:36.050581    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/834d08fb-b9a8-4a67-b022-fec07c4b5fa9-webhook-cert\") pod \"834d08fb-b9a8-4a67-b022-fec07c4b5fa9\" (UID: \"834d08fb-b9a8-4a67-b022-fec07c4b5fa9\") "
	Sep 06 18:42:36 addons-959832 kubelet[1215]: I0906 18:42:36.053289    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834d08fb-b9a8-4a67-b022-fec07c4b5fa9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "834d08fb-b9a8-4a67-b022-fec07c4b5fa9" (UID: "834d08fb-b9a8-4a67-b022-fec07c4b5fa9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 06 18:42:36 addons-959832 kubelet[1215]: I0906 18:42:36.053499    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834d08fb-b9a8-4a67-b022-fec07c4b5fa9-kube-api-access-dlnhd" (OuterVolumeSpecName: "kube-api-access-dlnhd") pod "834d08fb-b9a8-4a67-b022-fec07c4b5fa9" (UID: "834d08fb-b9a8-4a67-b022-fec07c4b5fa9"). InnerVolumeSpecName "kube-api-access-dlnhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:42:36 addons-959832 kubelet[1215]: I0906 18:42:36.097299    1215 scope.go:117] "RemoveContainer" containerID="2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e"
	Sep 06 18:42:36 addons-959832 kubelet[1215]: I0906 18:42:36.120713    1215 scope.go:117] "RemoveContainer" containerID="2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e"
	Sep 06 18:42:36 addons-959832 kubelet[1215]: E0906 18:42:36.121216    1215 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e\": container with ID starting with 2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e not found: ID does not exist" containerID="2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e"
	Sep 06 18:42:36 addons-959832 kubelet[1215]: I0906 18:42:36.121263    1215 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e"} err="failed to get container status \"2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e\": rpc error: code = NotFound desc = could not find container \"2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e\": container with ID starting with 2f6f1328251075eb865637481cca480047c02c28230b3b2944a26f810dec856e not found: ID does not exist"
	Sep 06 18:42:36 addons-959832 kubelet[1215]: I0906 18:42:36.151269    1215 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/834d08fb-b9a8-4a67-b022-fec07c4b5fa9-webhook-cert\") on node \"addons-959832\" DevicePath \"\""
	Sep 06 18:42:36 addons-959832 kubelet[1215]: I0906 18:42:36.151338    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dlnhd\" (UniqueName: \"kubernetes.io/projected/834d08fb-b9a8-4a67-b022-fec07c4b5fa9-kube-api-access-dlnhd\") on node \"addons-959832\" DevicePath \"\""
	Sep 06 18:42:37 addons-959832 kubelet[1215]: I0906 18:42:37.340047    1215 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="834d08fb-b9a8-4a67-b022-fec07c4b5fa9" path="/var/lib/kubelet/pods/834d08fb-b9a8-4a67-b022-fec07c4b5fa9/volumes"
	Sep 06 18:42:37 addons-959832 kubelet[1215]: E0906 18:42:37.449697    1215 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 06 18:42:37 addons-959832 kubelet[1215]: E0906 18:42:37.449944    1215 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:busybox,Image:gcr.io/k8s-minikube/busybox:1.28.4-glibc,Command:[sleep 3600],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8sxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod busybox_default(1c130620-63bc-4232-b463-81e6378edb12): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: authentication failed" logger="UnhandledError"
	Sep 06 18:42:37 addons-959832 kubelet[1215]: E0906 18:42:37.451507    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: authentication failed\"" pod="default/busybox" podUID="1c130620-63bc-4232-b463-81e6378edb12"
	
	
	==> storage-provisioner [095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120] <==
	I0906 18:30:26.339092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:30:26.364532       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:30:26.364614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:30:26.389908       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:30:26.390911       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3!
	I0906 18:30:26.391024       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c870ecaa-1488-487e-a063-0e518015e13e", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3 became leader
	I0906 18:30:26.492036       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-959832 -n addons-959832
helpers_test.go:261: (dbg) Run:  kubectl --context addons-959832 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-959832 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-959832 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-959832/192.168.39.98
	Start Time:       Fri, 06 Sep 2024 18:31:44 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n8sxx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n8sxx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/busybox to addons-959832
	  Normal   Pulling    9m30s (x4 over 10m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m30s (x4 over 10m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m30s (x4 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m19s (x6 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    53s (x42 over 10m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (150.29s)
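
The describe output above shows why the post-mortem reports busybox as non-running: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password", so the pod stays in ImagePullBackOff. A quick way to re-check the pull path by hand (a diagnostic sketch; it assumes the addons-959832 profile is still up and that crictl is available inside the node, neither of which this report shows) would be:

	kubectl --context addons-959832 get events -n default --field-selector involvedObject.name=busybox
	minikube ssh -p addons-959832 -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

If the manual pull succeeds, the failure was a transient registry or credential issue rather than a node-side misconfiguration.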

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (316.19s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.718852ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-flnx5" [01d423d8-1a69-47b2-be5a-57dc6f3f7268] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004844204s
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (69.190494ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 9m34.812442236s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (63.125441ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 9m37.467710148s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (62.575865ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 9m41.296742652s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (75.377372ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 9m51.21620444s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (68.928593ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 10m3.20139732s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (67.217543ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 10m23.883576909s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (70.169744ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 10m45.755445835s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (63.32059ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 11m15.016701491s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (60.671839ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 12m5.563221856s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (62.79727ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 13m26.462569259s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-959832 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-959832 top pods -n kube-system: exit status 1 (65.903327ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-d5d26, age: 14m42.072889655s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
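
kubectl top pods reads from the metrics.k8s.io API served by the metrics-server addon, so the repeated "Metrics not available" errors above mean that API never returned samples for those pods across roughly five minutes of retries. A direct probe of that API (a diagnostic sketch; the deployment name metrics-server is assumed from the pod name metrics-server-84c5f94fbc-flnx5 above) would look like:

	kubectl --context addons-959832 get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"
	kubectl --context addons-959832 -n kube-system logs deploy/metrics-server --tail=50

An empty items list from the first command, or scrape errors in the second, would place the failure in metrics-server itself rather than in kubectl.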
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-959832 -n addons-959832
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-959832 logs -n 25: (1.353149631s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-726386                                                                     | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-693029                                                                     | download-only-693029 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-071210 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | binary-mirror-071210                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42457                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-071210                                                                     | binary-mirror-071210 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| addons  | disable dashboard -p                                                                        | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-959832 --wait=true                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:39 UTC | 06 Sep 24 18:39 UTC |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-959832 ssh curl -s                                                                   | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-959832 addons                                                                        | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-959832 addons                                                                        | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-959832 ssh cat                                                                       | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | /opt/local-path-provisioner/pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-959832 ip                                                                            | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:40 UTC | 06 Sep 24 18:40 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:41 UTC | 06 Sep 24 18:41 UTC |
	|         | -p addons-959832                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:41 UTC | 06 Sep 24 18:41 UTC |
	|         | -p addons-959832                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:41 UTC | 06 Sep 24 18:41 UTC |
	|         | addons-959832                                                                               |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:41 UTC | 06 Sep 24 18:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-959832 ip                                                                            | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-959832 addons disable                                                                | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-959832 addons                                                                        | addons-959832        | jenkins | v1.34.0 | 06 Sep 24 18:45 UTC | 06 Sep 24 18:45 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:30.440394   13823 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:30.440643   13823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:30.440652   13823 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:30.440656   13823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:30.440824   13823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:29:30.441460   13823 out.go:352] Setting JSON to false
	I0906 18:29:30.442255   13823 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":719,"bootTime":1725646651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:29:30.442312   13823 start.go:139] virtualization: kvm guest
	I0906 18:29:30.444228   13823 out.go:177] * [addons-959832] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 18:29:30.445334   13823 notify.go:220] Checking for updates...
	I0906 18:29:30.445342   13823 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:29:30.446652   13823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:30.448060   13823 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:29:30.449528   13823 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:30.450779   13823 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:29:30.451986   13823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:29:30.453700   13823 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:30.485465   13823 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 18:29:30.486701   13823 start.go:297] selected driver: kvm2
	I0906 18:29:30.486713   13823 start.go:901] validating driver "kvm2" against <nil>
	I0906 18:29:30.486727   13823 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:29:30.487397   13823 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:30.487478   13823 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 18:29:30.502694   13823 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 18:29:30.502738   13823 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:30.502931   13823 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:29:30.502959   13823 cni.go:84] Creating CNI manager for ""
	I0906 18:29:30.502966   13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 18:29:30.502978   13823 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 18:29:30.503026   13823 start.go:340] cluster config:
	{Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:30.503117   13823 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:30.504979   13823 out.go:177] * Starting "addons-959832" primary control-plane node in "addons-959832" cluster
	I0906 18:29:30.506126   13823 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:29:30.506168   13823 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 18:29:30.506178   13823 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:30.506272   13823 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 18:29:30.506286   13823 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 18:29:30.506559   13823 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json ...
	I0906 18:29:30.506577   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json: {Name:mkb043cbbb2997cf908fb60acd39795871d65137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:29:30.506698   13823 start.go:360] acquireMachinesLock for addons-959832: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:29:30.506741   13823 start.go:364] duration metric: took 31.601µs to acquireMachinesLock for "addons-959832"
	I0906 18:29:30.506759   13823 start.go:93] Provisioning new machine with config: &{Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:29:30.506820   13823 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 18:29:30.508432   13823 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0906 18:29:30.508550   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:29:30.508587   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:29:30.522987   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34483
	I0906 18:29:30.523384   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:29:30.523869   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:29:30.523890   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:29:30.524169   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:29:30.524345   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:30.524450   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:30.524591   13823 start.go:159] libmachine.API.Create for "addons-959832" (driver="kvm2")
	I0906 18:29:30.524624   13823 client.go:168] LocalClient.Create starting
	I0906 18:29:30.524668   13823 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 18:29:30.595679   13823 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 18:29:30.708441   13823 main.go:141] libmachine: Running pre-create checks...
	I0906 18:29:30.708464   13823 main.go:141] libmachine: (addons-959832) Calling .PreCreateCheck
	I0906 18:29:30.708957   13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
	I0906 18:29:30.709397   13823 main.go:141] libmachine: Creating machine...
	I0906 18:29:30.709410   13823 main.go:141] libmachine: (addons-959832) Calling .Create
	I0906 18:29:30.709556   13823 main.go:141] libmachine: (addons-959832) Creating KVM machine...
	I0906 18:29:30.710795   13823 main.go:141] libmachine: (addons-959832) DBG | found existing default KVM network
	I0906 18:29:30.711508   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:30.711378   13845 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0906 18:29:30.711570   13823 main.go:141] libmachine: (addons-959832) DBG | created network xml: 
	I0906 18:29:30.711607   13823 main.go:141] libmachine: (addons-959832) DBG | <network>
	I0906 18:29:30.711624   13823 main.go:141] libmachine: (addons-959832) DBG |   <name>mk-addons-959832</name>
	I0906 18:29:30.711646   13823 main.go:141] libmachine: (addons-959832) DBG |   <dns enable='no'/>
	I0906 18:29:30.711654   13823 main.go:141] libmachine: (addons-959832) DBG |   
	I0906 18:29:30.711661   13823 main.go:141] libmachine: (addons-959832) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0906 18:29:30.711668   13823 main.go:141] libmachine: (addons-959832) DBG |     <dhcp>
	I0906 18:29:30.711673   13823 main.go:141] libmachine: (addons-959832) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0906 18:29:30.711684   13823 main.go:141] libmachine: (addons-959832) DBG |     </dhcp>
	I0906 18:29:30.711691   13823 main.go:141] libmachine: (addons-959832) DBG |   </ip>
	I0906 18:29:30.711698   13823 main.go:141] libmachine: (addons-959832) DBG |   
	I0906 18:29:30.711706   13823 main.go:141] libmachine: (addons-959832) DBG | </network>
	I0906 18:29:30.711714   13823 main.go:141] libmachine: (addons-959832) DBG | 
	I0906 18:29:30.716914   13823 main.go:141] libmachine: (addons-959832) DBG | trying to create private KVM network mk-addons-959832 192.168.39.0/24...
	I0906 18:29:30.784502   13823 main.go:141] libmachine: (addons-959832) DBG | private KVM network mk-addons-959832 192.168.39.0/24 created
	I0906 18:29:30.784548   13823 main.go:141] libmachine: (addons-959832) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 ...
	I0906 18:29:30.784580   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:30.784495   13845 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:30.784596   13823 main.go:141] libmachine: (addons-959832) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:29:30.784621   13823 main.go:141] libmachine: (addons-959832) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 18:29:31.031605   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.031496   13845 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa...
	I0906 18:29:31.150285   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.150157   13845 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/addons-959832.rawdisk...
	I0906 18:29:31.150312   13823 main.go:141] libmachine: (addons-959832) DBG | Writing magic tar header
	I0906 18:29:31.150322   13823 main.go:141] libmachine: (addons-959832) DBG | Writing SSH key tar header
	I0906 18:29:31.150329   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:31.150306   13845 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 ...
	I0906 18:29:31.150514   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832
	I0906 18:29:31.150551   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 18:29:31.150582   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832 (perms=drwx------)
	I0906 18:29:31.150604   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 18:29:31.150630   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 18:29:31.150652   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 18:29:31.150664   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:31.150681   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 18:29:31.150694   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 18:29:31.150709   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home/jenkins
	I0906 18:29:31.150726   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 18:29:31.150738   13823 main.go:141] libmachine: (addons-959832) DBG | Checking permissions on dir: /home
	I0906 18:29:31.150755   13823 main.go:141] libmachine: (addons-959832) DBG | Skipping /home - not owner
	I0906 18:29:31.150771   13823 main.go:141] libmachine: (addons-959832) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 18:29:31.150781   13823 main.go:141] libmachine: (addons-959832) Creating domain...
	I0906 18:29:31.151641   13823 main.go:141] libmachine: (addons-959832) define libvirt domain using xml: 
	I0906 18:29:31.151668   13823 main.go:141] libmachine: (addons-959832) <domain type='kvm'>
	I0906 18:29:31.151680   13823 main.go:141] libmachine: (addons-959832)   <name>addons-959832</name>
	I0906 18:29:31.151693   13823 main.go:141] libmachine: (addons-959832)   <memory unit='MiB'>4000</memory>
	I0906 18:29:31.151703   13823 main.go:141] libmachine: (addons-959832)   <vcpu>2</vcpu>
	I0906 18:29:31.151718   13823 main.go:141] libmachine: (addons-959832)   <features>
	I0906 18:29:31.151723   13823 main.go:141] libmachine: (addons-959832)     <acpi/>
	I0906 18:29:31.151727   13823 main.go:141] libmachine: (addons-959832)     <apic/>
	I0906 18:29:31.151736   13823 main.go:141] libmachine: (addons-959832)     <pae/>
	I0906 18:29:31.151741   13823 main.go:141] libmachine: (addons-959832)     
	I0906 18:29:31.151747   13823 main.go:141] libmachine: (addons-959832)   </features>
	I0906 18:29:31.151754   13823 main.go:141] libmachine: (addons-959832)   <cpu mode='host-passthrough'>
	I0906 18:29:31.151759   13823 main.go:141] libmachine: (addons-959832)   
	I0906 18:29:31.151772   13823 main.go:141] libmachine: (addons-959832)   </cpu>
	I0906 18:29:31.151779   13823 main.go:141] libmachine: (addons-959832)   <os>
	I0906 18:29:31.151788   13823 main.go:141] libmachine: (addons-959832)     <type>hvm</type>
	I0906 18:29:31.151795   13823 main.go:141] libmachine: (addons-959832)     <boot dev='cdrom'/>
	I0906 18:29:31.151801   13823 main.go:141] libmachine: (addons-959832)     <boot dev='hd'/>
	I0906 18:29:31.151808   13823 main.go:141] libmachine: (addons-959832)     <bootmenu enable='no'/>
	I0906 18:29:31.151812   13823 main.go:141] libmachine: (addons-959832)   </os>
	I0906 18:29:31.151818   13823 main.go:141] libmachine: (addons-959832)   <devices>
	I0906 18:29:31.151825   13823 main.go:141] libmachine: (addons-959832)     <disk type='file' device='cdrom'>
	I0906 18:29:31.151834   13823 main.go:141] libmachine: (addons-959832)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/boot2docker.iso'/>
	I0906 18:29:31.151841   13823 main.go:141] libmachine: (addons-959832)       <target dev='hdc' bus='scsi'/>
	I0906 18:29:31.151847   13823 main.go:141] libmachine: (addons-959832)       <readonly/>
	I0906 18:29:31.151853   13823 main.go:141] libmachine: (addons-959832)     </disk>
	I0906 18:29:31.151859   13823 main.go:141] libmachine: (addons-959832)     <disk type='file' device='disk'>
	I0906 18:29:31.151867   13823 main.go:141] libmachine: (addons-959832)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 18:29:31.151878   13823 main.go:141] libmachine: (addons-959832)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/addons-959832.rawdisk'/>
	I0906 18:29:31.151886   13823 main.go:141] libmachine: (addons-959832)       <target dev='hda' bus='virtio'/>
	I0906 18:29:31.151894   13823 main.go:141] libmachine: (addons-959832)     </disk>
	I0906 18:29:31.151899   13823 main.go:141] libmachine: (addons-959832)     <interface type='network'>
	I0906 18:29:31.151908   13823 main.go:141] libmachine: (addons-959832)       <source network='mk-addons-959832'/>
	I0906 18:29:31.151915   13823 main.go:141] libmachine: (addons-959832)       <model type='virtio'/>
	I0906 18:29:31.151923   13823 main.go:141] libmachine: (addons-959832)     </interface>
	I0906 18:29:31.151931   13823 main.go:141] libmachine: (addons-959832)     <interface type='network'>
	I0906 18:29:31.151957   13823 main.go:141] libmachine: (addons-959832)       <source network='default'/>
	I0906 18:29:31.151984   13823 main.go:141] libmachine: (addons-959832)       <model type='virtio'/>
	I0906 18:29:31.151993   13823 main.go:141] libmachine: (addons-959832)     </interface>
	I0906 18:29:31.152008   13823 main.go:141] libmachine: (addons-959832)     <serial type='pty'>
	I0906 18:29:31.152028   13823 main.go:141] libmachine: (addons-959832)       <target port='0'/>
	I0906 18:29:31.152046   13823 main.go:141] libmachine: (addons-959832)     </serial>
	I0906 18:29:31.152059   13823 main.go:141] libmachine: (addons-959832)     <console type='pty'>
	I0906 18:29:31.152070   13823 main.go:141] libmachine: (addons-959832)       <target type='serial' port='0'/>
	I0906 18:29:31.152078   13823 main.go:141] libmachine: (addons-959832)     </console>
	I0906 18:29:31.152086   13823 main.go:141] libmachine: (addons-959832)     <rng model='virtio'>
	I0906 18:29:31.152095   13823 main.go:141] libmachine: (addons-959832)       <backend model='random'>/dev/random</backend>
	I0906 18:29:31.152103   13823 main.go:141] libmachine: (addons-959832)     </rng>
	I0906 18:29:31.152113   13823 main.go:141] libmachine: (addons-959832)     
	I0906 18:29:31.152126   13823 main.go:141] libmachine: (addons-959832)     
	I0906 18:29:31.152138   13823 main.go:141] libmachine: (addons-959832)   </devices>
	I0906 18:29:31.152148   13823 main.go:141] libmachine: (addons-959832) </domain>
	I0906 18:29:31.152161   13823 main.go:141] libmachine: (addons-959832) 
	I0906 18:29:31.158081   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:b5:f5:6a in network default
	I0906 18:29:31.158542   13823 main.go:141] libmachine: (addons-959832) Ensuring networks are active...
	I0906 18:29:31.158562   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:31.159097   13823 main.go:141] libmachine: (addons-959832) Ensuring network default is active
	I0906 18:29:31.159345   13823 main.go:141] libmachine: (addons-959832) Ensuring network mk-addons-959832 is active
	I0906 18:29:31.159767   13823 main.go:141] libmachine: (addons-959832) Getting domain xml...
	I0906 18:29:31.160314   13823 main.go:141] libmachine: (addons-959832) Creating domain...
	I0906 18:29:32.546282   13823 main.go:141] libmachine: (addons-959832) Waiting to get IP...
	I0906 18:29:32.547051   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:32.547580   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:32.547618   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:32.547518   13845 retry.go:31] will retry after 234.819193ms: waiting for machine to come up
	I0906 18:29:32.783988   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:32.784398   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:32.784420   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:32.784350   13845 retry.go:31] will retry after 374.097016ms: waiting for machine to come up
	I0906 18:29:33.159641   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:33.160076   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:33.160104   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:33.160024   13845 retry.go:31] will retry after 398.438198ms: waiting for machine to come up
	I0906 18:29:33.559453   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:33.559850   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:33.559879   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:33.559800   13845 retry.go:31] will retry after 513.667683ms: waiting for machine to come up
	I0906 18:29:34.075531   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:34.075976   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:34.076002   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:34.075937   13845 retry.go:31] will retry after 542.640322ms: waiting for machine to come up
	I0906 18:29:34.620767   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:34.621139   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:34.621164   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:34.621100   13845 retry.go:31] will retry after 952.553494ms: waiting for machine to come up
	I0906 18:29:35.575061   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:35.575519   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:35.575550   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:35.575475   13845 retry.go:31] will retry after 761.897484ms: waiting for machine to come up
	I0906 18:29:36.339380   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:36.339747   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:36.339775   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:36.339696   13845 retry.go:31] will retry after 1.058974587s: waiting for machine to come up
	I0906 18:29:37.399861   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:37.400184   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:37.400204   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:37.400146   13845 retry.go:31] will retry after 1.319275872s: waiting for machine to come up
	I0906 18:29:38.720600   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:38.721039   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:38.721065   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:38.720974   13845 retry.go:31] will retry after 1.544734383s: waiting for machine to come up
	I0906 18:29:40.267964   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:40.268338   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:40.268365   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:40.268303   13845 retry.go:31] will retry after 2.517498837s: waiting for machine to come up
	I0906 18:29:42.790192   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:42.790620   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:42.790646   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:42.790574   13845 retry.go:31] will retry after 2.829630462s: waiting for machine to come up
	I0906 18:29:45.621992   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:45.622542   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:45.622614   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:45.622535   13845 retry.go:31] will retry after 3.555249592s: waiting for machine to come up
	I0906 18:29:49.181782   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:49.182176   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find current IP address of domain addons-959832 in network mk-addons-959832
	I0906 18:29:49.182199   13823 main.go:141] libmachine: (addons-959832) DBG | I0906 18:29:49.182134   13845 retry.go:31] will retry after 4.155059883s: waiting for machine to come up
	I0906 18:29:53.340058   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:53.340648   13823 main.go:141] libmachine: (addons-959832) Found IP for machine: 192.168.39.98
	I0906 18:29:53.340677   13823 main.go:141] libmachine: (addons-959832) Reserving static IP address...
	I0906 18:29:53.340693   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has current primary IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:53.341097   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find host DHCP lease matching {name: "addons-959832", mac: "52:54:00:c2:2d:3d", ip: "192.168.39.98"} in network mk-addons-959832
	I0906 18:29:53.410890   13823 main.go:141] libmachine: (addons-959832) DBG | Getting to WaitForSSH function...
	I0906 18:29:53.410935   13823 main.go:141] libmachine: (addons-959832) Reserved static IP address: 192.168.39.98
	I0906 18:29:53.410957   13823 main.go:141] libmachine: (addons-959832) Waiting for SSH to be available...
	I0906 18:29:53.413061   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:53.413353   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832
	I0906 18:29:53.413381   13823 main.go:141] libmachine: (addons-959832) DBG | unable to find defined IP address of network mk-addons-959832 interface with MAC address 52:54:00:c2:2d:3d
	I0906 18:29:53.413528   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH client type: external
	I0906 18:29:53.413551   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa (-rw-------)
	I0906 18:29:53.413582   13823 main.go:141] libmachine: (addons-959832) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:29:53.413596   13823 main.go:141] libmachine: (addons-959832) DBG | About to run SSH command:
	I0906 18:29:53.413610   13823 main.go:141] libmachine: (addons-959832) DBG | exit 0
	I0906 18:29:53.424764   13823 main.go:141] libmachine: (addons-959832) DBG | SSH cmd err, output: exit status 255: 
	I0906 18:29:53.424790   13823 main.go:141] libmachine: (addons-959832) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0906 18:29:53.424803   13823 main.go:141] libmachine: (addons-959832) DBG | command : exit 0
	I0906 18:29:53.424811   13823 main.go:141] libmachine: (addons-959832) DBG | err     : exit status 255
	I0906 18:29:53.424834   13823 main.go:141] libmachine: (addons-959832) DBG | output  : 
	I0906 18:29:56.425071   13823 main.go:141] libmachine: (addons-959832) DBG | Getting to WaitForSSH function...
	I0906 18:29:56.427965   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.428313   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.428337   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.428498   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH client type: external
	I0906 18:29:56.428529   13823 main.go:141] libmachine: (addons-959832) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa (-rw-------)
	I0906 18:29:56.428584   13823 main.go:141] libmachine: (addons-959832) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:29:56.428611   13823 main.go:141] libmachine: (addons-959832) DBG | About to run SSH command:
	I0906 18:29:56.428625   13823 main.go:141] libmachine: (addons-959832) DBG | exit 0
	I0906 18:29:56.557151   13823 main.go:141] libmachine: (addons-959832) DBG | SSH cmd err, output: <nil>: 
	I0906 18:29:56.557379   13823 main.go:141] libmachine: (addons-959832) KVM machine creation complete!
	I0906 18:29:56.557702   13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
	I0906 18:29:56.558229   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:56.558444   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:56.558623   13823 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 18:29:56.558641   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:29:56.559843   13823 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 18:29:56.559860   13823 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 18:29:56.559867   13823 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 18:29:56.559876   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.562179   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.562551   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.562587   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.562760   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.562922   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.563071   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.563184   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.563323   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.563491   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.563501   13823 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 18:29:56.672324   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:29:56.672345   13823 main.go:141] libmachine: Detecting the provisioner...
	I0906 18:29:56.672355   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.675030   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.675361   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.675396   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.675587   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.675810   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.675962   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.676117   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.676285   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.676485   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.676498   13823 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 18:29:56.789500   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 18:29:56.789599   13823 main.go:141] libmachine: found compatible host: buildroot
	I0906 18:29:56.789615   13823 main.go:141] libmachine: Provisioning with buildroot...
	I0906 18:29:56.789627   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:56.789887   13823 buildroot.go:166] provisioning hostname "addons-959832"
	I0906 18:29:56.789910   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:56.790145   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.792479   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.792813   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.792840   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.792964   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.793128   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.793278   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.793413   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.793564   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.793755   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.793770   13823 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-959832 && echo "addons-959832" | sudo tee /etc/hostname
	I0906 18:29:56.923171   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-959832
	
	I0906 18:29:56.923196   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:56.925829   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.926137   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:56.926165   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:56.926301   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:56.926516   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.926688   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:56.926855   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:56.927018   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:56.927167   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:56.927182   13823 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-959832' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-959832/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-959832' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:29:57.047682   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:29:57.047717   13823 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 18:29:57.047760   13823 buildroot.go:174] setting up certificates
	I0906 18:29:57.047779   13823 provision.go:84] configureAuth start
	I0906 18:29:57.047796   13823 main.go:141] libmachine: (addons-959832) Calling .GetMachineName
	I0906 18:29:57.048060   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:57.050451   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.050790   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.050828   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.050983   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.053241   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.053584   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.053615   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.053778   13823 provision.go:143] copyHostCerts
	I0906 18:29:57.053849   13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 18:29:57.054015   13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 18:29:57.054086   13823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 18:29:57.054144   13823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.addons-959832 san=[127.0.0.1 192.168.39.98 addons-959832 localhost minikube]
	I0906 18:29:57.192700   13823 provision.go:177] copyRemoteCerts
	I0906 18:29:57.192756   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:29:57.192779   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.195474   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.195742   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.195770   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.195927   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.196116   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.196268   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.196488   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.284813   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 18:29:57.312554   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 18:29:57.338356   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:29:57.363612   13823 provision.go:87] duration metric: took 315.815529ms to configureAuth
	I0906 18:29:57.363640   13823 buildroot.go:189] setting minikube options for container-runtime
	I0906 18:29:57.363826   13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:29:57.363907   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.366452   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.366841   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.366868   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.367008   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.367195   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.367349   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.367475   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.367620   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:57.367765   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:57.367779   13823 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 18:29:57.603163   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 18:29:57.603188   13823 main.go:141] libmachine: Checking connection to Docker...
	I0906 18:29:57.603196   13823 main.go:141] libmachine: (addons-959832) Calling .GetURL
	I0906 18:29:57.604560   13823 main.go:141] libmachine: (addons-959832) DBG | Using libvirt version 6000000
	I0906 18:29:57.606895   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.607175   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.607201   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.607398   13823 main.go:141] libmachine: Docker is up and running!
	I0906 18:29:57.607413   13823 main.go:141] libmachine: Reticulating splines...
	I0906 18:29:57.607421   13823 client.go:171] duration metric: took 27.082788539s to LocalClient.Create
	I0906 18:29:57.607447   13823 start.go:167] duration metric: took 27.082857245s to libmachine.API.Create "addons-959832"
	I0906 18:29:57.607462   13823 start.go:293] postStartSetup for "addons-959832" (driver="kvm2")
	I0906 18:29:57.607488   13823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:29:57.607514   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.607782   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:29:57.607801   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.609814   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.610081   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.610134   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.610226   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.610417   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.610608   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.610769   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.695798   13823 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:29:57.700464   13823 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 18:29:57.700493   13823 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 18:29:57.700596   13823 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 18:29:57.700630   13823 start.go:296] duration metric: took 93.15804ms for postStartSetup
	I0906 18:29:57.700663   13823 main.go:141] libmachine: (addons-959832) Calling .GetConfigRaw
	I0906 18:29:57.701257   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:57.704196   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.704554   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.704585   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.704877   13823 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/config.json ...
	I0906 18:29:57.705072   13823 start.go:128] duration metric: took 27.1982419s to createHost
	I0906 18:29:57.705098   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.707499   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.707842   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.707862   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.708035   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.708256   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.708433   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.708569   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.708760   13823 main.go:141] libmachine: Using SSH client type: native
	I0906 18:29:57.708991   13823 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.98 22 <nil> <nil>}
	I0906 18:29:57.709005   13823 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 18:29:57.821756   13823 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725647397.800291454
	
	I0906 18:29:57.821779   13823 fix.go:216] guest clock: 1725647397.800291454
	I0906 18:29:57.821789   13823 fix.go:229] Guest: 2024-09-06 18:29:57.800291454 +0000 UTC Remote: 2024-09-06 18:29:57.705083739 +0000 UTC m=+27.297090225 (delta=95.207715ms)
	I0906 18:29:57.821840   13823 fix.go:200] guest clock delta is within tolerance: 95.207715ms
	I0906 18:29:57.821853   13823 start.go:83] releasing machines lock for "addons-959832", held for 27.315095887s
	I0906 18:29:57.821881   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.822185   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:57.824591   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.824964   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.824991   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.825103   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.825621   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.825837   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:29:57.825955   13823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:29:57.825998   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.826048   13823 ssh_runner.go:195] Run: cat /version.json
	I0906 18:29:57.826075   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:29:57.828396   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.828722   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.828752   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.828771   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.828910   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.829111   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.829201   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:57.829221   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:57.829287   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.829450   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.829463   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:29:57.829621   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:29:57.829749   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:29:57.829859   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:29:57.948786   13823 ssh_runner.go:195] Run: systemctl --version
	I0906 18:29:57.955191   13823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 18:29:58.113311   13823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 18:29:58.119769   13823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:29:58.119846   13823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:29:58.135762   13823 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 18:29:58.135789   13823 start.go:495] detecting cgroup driver to use...
	I0906 18:29:58.135859   13823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 18:29:58.151729   13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:29:58.166404   13823 docker.go:217] disabling cri-docker service (if available) ...
	I0906 18:29:58.166473   13823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 18:29:58.180954   13823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 18:29:58.195119   13823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 18:29:58.315328   13823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 18:29:58.467302   13823 docker.go:233] disabling docker service ...
	I0906 18:29:58.467362   13823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 18:29:58.482228   13823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 18:29:58.495471   13823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 18:29:58.606896   13823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 18:29:58.717897   13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 18:29:58.732638   13823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:29:58.751394   13823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 18:29:58.751461   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.762265   13823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 18:29:58.762343   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.772625   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.783002   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.793237   13823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:29:58.804024   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.814731   13823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.832054   13823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:29:58.842905   13823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:29:58.852537   13823 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 18:29:58.852595   13823 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 18:29:58.866354   13823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
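The sysctl probe above fails with status 255 because net.bridge.bridge-nf-call-iptables only exists once br_netfilter is loaded, so the run falls back to modprobe and then enables IP forwarding directly. A persistent equivalent, sketched on the assumption that modules-load.d and sysctl.d are available on the guest:
	# Load the module now and at boot, then apply both sysctls from a drop-in.
	sudo modprobe br_netfilter
	echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system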
	I0906 18:29:58.877194   13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:29:59.004604   13823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 18:29:59.101439   13823 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 18:29:59.101538   13823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 18:29:59.106286   13823 start.go:563] Will wait 60s for crictl version
	I0906 18:29:59.106358   13823 ssh_runner.go:195] Run: which crictl
	I0906 18:29:59.110304   13823 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:29:59.148807   13823 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 18:29:59.148953   13823 ssh_runner.go:195] Run: crio --version
	I0906 18:29:59.178394   13823 ssh_runner.go:195] Run: crio --version
	I0906 18:29:59.210051   13823 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 18:29:59.211504   13823 main.go:141] libmachine: (addons-959832) Calling .GetIP
	I0906 18:29:59.214173   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:59.214515   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:29:59.214548   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:29:59.214703   13823 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 18:29:59.218969   13823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
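The bash one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal line, append the current mapping, and copy the temp file back with sudo. The same idiom with the name and address as placeholders (illustrative only):
	# Replace-or-append a single hosts entry without duplicating it.
	NAME=host.minikube.internal; ADDR=192.168.39.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$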
	I0906 18:29:59.231960   13823 kubeadm.go:883] updating cluster {Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 18:29:59.232084   13823 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:29:59.232129   13823 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:29:59.263727   13823 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 18:29:59.263807   13823 ssh_runner.go:195] Run: which lz4
	I0906 18:29:59.267901   13823 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 18:29:59.271879   13823 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 18:29:59.271906   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 18:30:00.584417   13823 crio.go:462] duration metric: took 1.316553716s to copy over tarball
	I0906 18:30:00.584486   13823 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 18:30:02.812933   13823 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.228424681s)
	I0906 18:30:02.812968   13823 crio.go:469] duration metric: took 2.22852468s to extract the tarball
	I0906 18:30:02.812978   13823 ssh_runner.go:146] rm: /preloaded.tar.lz4
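The preload path above avoids pulling every image over the network: when /preloaded.tar.lz4 is missing, the cached tarball is copied in, unpacked into /var so CRI-O's image store is pre-populated, and then removed. Condensed into three shell steps (the source path is a placeholder):
	# Copy, unpack, and clean up the image preload tarball.
	test -f /preloaded.tar.lz4 || sudo cp "$PRELOAD_TARBALL" /preloaded.tar.lz4   # $PRELOAD_TARBALL is illustrative
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4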
	I0906 18:30:02.850138   13823 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:30:02.893341   13823 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 18:30:02.893365   13823 cache_images.go:84] Images are preloaded, skipping loading
	I0906 18:30:02.893375   13823 kubeadm.go:934] updating node { 192.168.39.98 8443 v1.31.0 crio true true} ...
	I0906 18:30:02.893497   13823 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-959832 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
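The drop-in above only overrides ExecStart; everything else comes from the kubelet.service unit written alongside it. One hedged way to confirm the override is active after the daemon-reload further down (not run in this log):
	# Show the base unit plus all drop-ins, then the effective ExecStart line.
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart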
	I0906 18:30:02.893579   13823 ssh_runner.go:195] Run: crio config
	I0906 18:30:02.943751   13823 cni.go:84] Creating CNI manager for ""
	I0906 18:30:02.943774   13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 18:30:02.943794   13823 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 18:30:02.943823   13823 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.98 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-959832 NodeName:addons-959832 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 18:30:02.943970   13823 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-959832"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
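The config above still uses the deprecated kubeadm.k8s.io/v1beta3 API, which kubeadm warns about later in this log. A hedged way to exercise or migrate it before the real init, assuming it has been written to /var/tmp/minikube/kubeadm.yaml as the following steps do:
	# Dry-run init against the generated config; nothing is persisted.
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# Rewrite the v1beta3 spec with a newer API version, as the warning suggests.
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml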
	I0906 18:30:02.944029   13823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:30:02.953978   13823 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 18:30:02.954045   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 18:30:02.963215   13823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 18:30:02.979953   13823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:30:02.996152   13823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0906 18:30:03.012715   13823 ssh_runner.go:195] Run: grep 192.168.39.98	control-plane.minikube.internal$ /etc/hosts
	I0906 18:30:03.016576   13823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:30:03.028370   13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:03.151085   13823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:03.168582   13823 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832 for IP: 192.168.39.98
	I0906 18:30:03.168607   13823 certs.go:194] generating shared ca certs ...
	I0906 18:30:03.168628   13823 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.168788   13823 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 18:30:03.299866   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt ...
	I0906 18:30:03.299897   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt: {Name:mke2b7c471d9f59e720011f7b10016af11ee9297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.300069   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key ...
	I0906 18:30:03.300084   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key: {Name:mkfac70472d4bba2ebe5c985be8bd475bcc6f548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.300181   13823 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 18:30:03.425280   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt ...
	I0906 18:30:03.425310   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt: {Name:mk08fa1d396d35f7ec100676e804094098a4d70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.425492   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key ...
	I0906 18:30:03.425520   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key: {Name:mk8fe87021c9d97780410b17544e3c226973cd76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.425623   13823 certs.go:256] generating profile certs ...
	I0906 18:30:03.425675   13823 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key
	I0906 18:30:03.425689   13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt with IP's: []
	I0906 18:30:03.659418   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt ...
	I0906 18:30:03.659450   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: {Name:mk0f9c2f503201837abe2d4909970e9be7ff24f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.659616   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key ...
	I0906 18:30:03.659626   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.key: {Name:mkdc65ba0a6775a2f0eae4f7b7974195d86c87d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.659695   13823 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e
	I0906 18:30:03.659712   13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.98]
	I0906 18:30:03.747012   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e ...
	I0906 18:30:03.747038   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e: {Name:mkac8ea9fd65a4ebd10dcac540165d914ce7db8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.747178   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e ...
	I0906 18:30:03.747192   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e: {Name:mk4a1ef0165a60b29c7ae52805cfb6305e8fcd01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.747259   13823 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt.2d667b7e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt
	I0906 18:30:03.747327   13823 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key.2d667b7e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key
	I0906 18:30:03.747377   13823 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key
	I0906 18:30:03.747394   13823 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt with IP's: []
	I0906 18:30:03.959127   13823 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt ...
	I0906 18:30:03.959155   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt: {Name:mkde7bd5ab135e6d5e9a29c7a353c7a7ff8f667c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.959314   13823 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key ...
	I0906 18:30:03.959329   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key: {Name:mkaff3d579d60be2767a53917ba5e3ae0b22c412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:03.959489   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 18:30:03.959520   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:30:03.959543   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:30:03.959565   13823 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
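Every profile certificate above is signed by the freshly generated minikubeCA, and the apiserver cert carries the service VIP, loopback and node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.98) as SANs. A sketch of verifying that after the fact (assumed, not part of the run):
	# Check the issuer chain and the SANs on the generated apiserver certificate.
	openssl verify -CAfile /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt
	openssl x509 -in /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'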
	I0906 18:30:03.960109   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:30:03.987472   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 18:30:04.010859   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:30:04.045335   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:30:04.069442   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 18:30:04.096260   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 18:30:04.121182   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:30:04.149817   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 18:30:04.173890   13823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:30:04.197498   13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 18:30:04.216950   13823 ssh_runner.go:195] Run: openssl version
	I0906 18:30:04.222654   13823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:30:04.233330   13823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:04.237701   13823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:04.237760   13823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:04.243532   13823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
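The b5213941.0 link name is the CA certificate's OpenSSL subject hash plus a .0 suffix, which is exactly what the openssl x509 -hash -noout call just above computed; the symlink lets TLS clients find the CA in /etc/ssl/certs by hash lookup. To see the correspondence:
	# The printed hash, suffixed with .0, is the expected symlink name.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0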
	I0906 18:30:04.256013   13823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:30:04.260734   13823 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:30:04.260787   13823 kubeadm.go:392] StartCluster: {Name:addons-959832 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-959832 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:30:04.260898   13823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 18:30:04.260952   13823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 18:30:04.303067   13823 cri.go:89] found id: ""
	I0906 18:30:04.303126   13823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 18:30:04.313281   13823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 18:30:04.324983   13823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 18:30:04.335214   13823 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 18:30:04.335235   13823 kubeadm.go:157] found existing configuration files:
	
	I0906 18:30:04.335277   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 18:30:04.344648   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 18:30:04.344695   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 18:30:04.354421   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 18:30:04.363814   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 18:30:04.363883   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 18:30:04.373191   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 18:30:04.382426   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 18:30:04.382489   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 18:30:04.392389   13823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 18:30:04.402110   13823 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 18:30:04.402181   13823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 18:30:04.411730   13823 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 18:30:04.463645   13823 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 18:30:04.463694   13823 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 18:30:04.559431   13823 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 18:30:04.559574   13823 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 18:30:04.559691   13823 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 18:30:04.568785   13823 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 18:30:04.633550   13823 out.go:235]   - Generating certificates and keys ...
	I0906 18:30:04.633656   13823 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 18:30:04.633738   13823 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 18:30:04.850232   13823 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 18:30:05.028833   13823 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 18:30:05.198669   13823 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 18:30:05.265171   13823 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 18:30:05.396138   13823 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 18:30:05.396314   13823 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-959832 localhost] and IPs [192.168.39.98 127.0.0.1 ::1]
	I0906 18:30:05.615454   13823 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 18:30:05.615825   13823 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-959832 localhost] and IPs [192.168.39.98 127.0.0.1 ::1]
	I0906 18:30:05.699300   13823 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 18:30:05.879000   13823 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 18:30:05.979662   13823 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 18:30:05.979866   13823 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 18:30:06.143465   13823 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 18:30:06.399160   13823 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 18:30:06.612959   13823 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 18:30:06.801192   13823 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 18:30:06.957635   13823 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 18:30:06.958075   13823 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 18:30:06.960513   13823 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 18:30:06.962637   13823 out.go:235]   - Booting up control plane ...
	I0906 18:30:06.962755   13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 18:30:06.962853   13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 18:30:06.962936   13823 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 18:30:06.982006   13823 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 18:30:06.987635   13823 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 18:30:06.987741   13823 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 18:30:07.107392   13823 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 18:30:07.107507   13823 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 18:30:07.608684   13823 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.950467ms
	I0906 18:30:07.608794   13823 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 18:30:12.608494   13823 kubeadm.go:310] [api-check] The API server is healthy after 5.001776937s
	I0906 18:30:12.627560   13823 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 18:30:12.653476   13823 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 18:30:12.689334   13823 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 18:30:12.689602   13823 kubeadm.go:310] [mark-control-plane] Marking the node addons-959832 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 18:30:12.704990   13823 kubeadm.go:310] [bootstrap-token] Using token: ithoaf.u83bc4nltc0uwhpo
	I0906 18:30:12.706456   13823 out.go:235]   - Configuring RBAC rules ...
	I0906 18:30:12.706574   13823 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 18:30:12.717372   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 18:30:12.735384   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 18:30:12.742188   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 18:30:12.748903   13823 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 18:30:12.753193   13823 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 18:30:13.018036   13823 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 18:30:13.440120   13823 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 18:30:14.029827   13823 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 18:30:14.029853   13823 kubeadm.go:310] 
	I0906 18:30:14.029954   13823 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 18:30:14.029981   13823 kubeadm.go:310] 
	I0906 18:30:14.030093   13823 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 18:30:14.030104   13823 kubeadm.go:310] 
	I0906 18:30:14.030140   13823 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 18:30:14.030226   13823 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 18:30:14.030309   13823 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 18:30:14.030318   13823 kubeadm.go:310] 
	I0906 18:30:14.030403   13823 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 18:30:14.030428   13823 kubeadm.go:310] 
	I0906 18:30:14.030488   13823 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 18:30:14.030498   13823 kubeadm.go:310] 
	I0906 18:30:14.030561   13823 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 18:30:14.030660   13823 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 18:30:14.030776   13823 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 18:30:14.030796   13823 kubeadm.go:310] 
	I0906 18:30:14.030915   13823 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 18:30:14.031015   13823 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 18:30:14.031028   13823 kubeadm.go:310] 
	I0906 18:30:14.031132   13823 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ithoaf.u83bc4nltc0uwhpo \
	I0906 18:30:14.031273   13823 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 18:30:14.031306   13823 kubeadm.go:310] 	--control-plane 
	I0906 18:30:14.031316   13823 kubeadm.go:310] 
	I0906 18:30:14.031450   13823 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 18:30:14.031472   13823 kubeadm.go:310] 
	I0906 18:30:14.031592   13823 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ithoaf.u83bc4nltc0uwhpo \
	I0906 18:30:14.031750   13823 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 18:30:14.032620   13823 kubeadm.go:310] W0906 18:30:04.444733     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:14.033044   13823 kubeadm.go:310] W0906 18:30:04.446560     823 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:14.033225   13823 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 18:30:14.033247   13823 cni.go:84] Creating CNI manager for ""
	I0906 18:30:14.033257   13823 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 18:30:14.035685   13823 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 18:30:14.037043   13823 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 18:30:14.051040   13823 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
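The 496-byte file copied above is the bridge CNI configuration for the pod CIDR chosen earlier. Its exact contents are not reproduced in the log; a minimal conflist of the same general shape, written the same way, might look like this (all field values are assumptions):
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF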
	I0906 18:30:14.080330   13823 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 18:30:14.080403   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:14.080418   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-959832 minikube.k8s.io/updated_at=2024_09_06T18_30_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=addons-959832 minikube.k8s.io/primary=true
	I0906 18:30:14.123199   13823 ops.go:34] apiserver oom_adj: -16
	I0906 18:30:14.247505   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:14.748250   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:15.248440   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:15.747562   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:16.247913   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:16.747636   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:17.248181   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:17.748128   13823 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:17.838400   13823 kubeadm.go:1113] duration metric: took 3.758062138s to wait for elevateKubeSystemPrivileges
	I0906 18:30:17.838441   13823 kubeadm.go:394] duration metric: took 13.577657427s to StartCluster
	I0906 18:30:17.838464   13823 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:17.838613   13823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:30:17.839096   13823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:17.839337   13823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 18:30:17.839344   13823 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.98 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:30:17.839425   13823 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
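The toEnable map above mirrors what the minikube CLI exposes; the same toggles can be inspected or flipped per profile after start (illustrative, not part of the test):
	minikube addons list -p addons-959832
	minikube addons enable registry -p addons-959832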
	I0906 18:30:17.839549   13823 addons.go:69] Setting yakd=true in profile "addons-959832"
	I0906 18:30:17.839564   13823 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-959832"
	I0906 18:30:17.839564   13823 addons.go:69] Setting helm-tiller=true in profile "addons-959832"
	I0906 18:30:17.839600   13823 addons.go:69] Setting storage-provisioner=true in profile "addons-959832"
	I0906 18:30:17.839601   13823 addons.go:69] Setting inspektor-gadget=true in profile "addons-959832"
	I0906 18:30:17.839616   13823 addons.go:234] Setting addon storage-provisioner=true in "addons-959832"
	I0906 18:30:17.839621   13823 addons.go:234] Setting addon inspektor-gadget=true in "addons-959832"
	I0906 18:30:17.839625   13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:30:17.839635   13823 addons.go:234] Setting addon helm-tiller=true in "addons-959832"
	I0906 18:30:17.839624   13823 addons.go:69] Setting ingress-dns=true in profile "addons-959832"
	I0906 18:30:17.839656   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839680   13823 addons.go:234] Setting addon ingress-dns=true in "addons-959832"
	I0906 18:30:17.839708   13823 addons.go:69] Setting metrics-server=true in profile "addons-959832"
	I0906 18:30:17.839721   13823 addons.go:69] Setting gcp-auth=true in profile "addons-959832"
	I0906 18:30:17.839706   13823 addons.go:69] Setting ingress=true in profile "addons-959832"
	I0906 18:30:17.839737   13823 addons.go:234] Setting addon metrics-server=true in "addons-959832"
	I0906 18:30:17.839738   13823 mustload.go:65] Loading cluster: addons-959832
	I0906 18:30:17.839744   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839683   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839951   13823 config.go:182] Loaded profile config "addons-959832": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:30:17.840149   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840201   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.840215   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840233   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.839763   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.840319   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840341   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.840156   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.839590   13823 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-959832"
	I0906 18:30:17.840465   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.840490   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839591   13823 addons.go:69] Setting registry=true in profile "addons-959832"
	I0906 18:30:17.840596   13823 addons.go:234] Setting addon registry=true in "addons-959832"
	I0906 18:30:17.840637   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.840665   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.840688   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.841280   13823 out.go:177] * Verifying Kubernetes components...
	I0906 18:30:17.839582   13823 addons.go:234] Setting addon yakd=true in "addons-959832"
	I0906 18:30:17.841416   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.839685   13823 addons.go:69] Setting volcano=true in profile "addons-959832"
	I0906 18:30:17.841566   13823 addons.go:234] Setting addon volcano=true in "addons-959832"
	I0906 18:30:17.839689   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.841626   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.841783   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.841812   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.841859   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.839695   13823 addons.go:69] Setting cloud-spanner=true in profile "addons-959832"
	I0906 18:30:17.841931   13823 addons.go:234] Setting addon cloud-spanner=true in "addons-959832"
	I0906 18:30:17.841963   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.841970   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.841989   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.841816   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.842303   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.842321   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.842543   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.842595   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.839696   13823 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-959832"
	I0906 18:30:17.842884   13823 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-959832"
	I0906 18:30:17.839699   13823 addons.go:69] Setting volumesnapshots=true in profile "addons-959832"
	I0906 18:30:17.839713   13823 addons.go:69] Setting default-storageclass=true in profile "addons-959832"
	I0906 18:30:17.839762   13823 addons.go:234] Setting addon ingress=true in "addons-959832"
	I0906 18:30:17.842835   13823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:17.839705   13823 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-959832"
	I0906 18:30:17.843210   13823 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-959832"
	I0906 18:30:17.843351   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.843531   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.843563   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.843835   13823 addons.go:234] Setting addon volumesnapshots=true in "addons-959832"
	I0906 18:30:17.843857   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.844006   13823 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-959832"
	I0906 18:30:17.844352   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.844369   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.853075   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.861521   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42115
	I0906 18:30:17.862212   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.862927   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.862953   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.863254   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44761
	I0906 18:30:17.863342   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.863358   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0906 18:30:17.864034   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.864195   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.864234   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.864508   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.864529   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.864924   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.868974   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.869351   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.869398   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.869553   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.869575   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.879527   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39157
	I0906 18:30:17.879542   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0906 18:30:17.879654   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0906 18:30:17.879684   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0906 18:30:17.879760   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.881648   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.885011   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.885160   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I0906 18:30:17.885420   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.885459   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.885971   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.886011   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.886343   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.886375   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.886602   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.886665   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.886686   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.886716   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.886809   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45243
	I0906 18:30:17.886904   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43073
	I0906 18:30:17.887101   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887199   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887215   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887238   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.887599   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.888208   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.888371   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888383   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888541   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888561   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888566   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.888701   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888711   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888743   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.888754   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.888780   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.889687   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.889730   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.889761   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.889889   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.889901   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.889943   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.889978   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.890062   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.890069   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.890553   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.890607   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.891323   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.891899   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.891930   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.892658   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.892934   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.893002   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.893143   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.893184   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.893806   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.893854   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.894913   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.894960   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.895352   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.895805   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.895847   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.897573   13823 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0906 18:30:17.899434   13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0906 18:30:17.899459   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0906 18:30:17.899481   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.903071   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.903469   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.903516   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.903739   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.903926   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.904048   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.904161   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.911366   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46819
	I0906 18:30:17.912019   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.912706   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.912741   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.913185   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.913911   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.913970   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.916304   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0906 18:30:17.916921   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.917609   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.917631   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.918094   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.918809   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.918849   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.920068   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34889
	I0906 18:30:17.920527   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.921055   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.921080   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.921442   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.921621   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.923561   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.924047   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I0906 18:30:17.924598   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.925400   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.925427   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.925816   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.925833   13823 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0906 18:30:17.926025   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.927332   13823 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 18:30:17.927362   13823 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 18:30:17.927413   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.928541   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.931169   13823 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0906 18:30:17.932027   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.932560   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.932588   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.932970   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0906 18:30:17.933032   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 18:30:17.933049   13823 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 18:30:17.933073   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.933158   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.933325   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.933426   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.933566   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.934213   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.934915   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.934933   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.935404   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.935557   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0906 18:30:17.935722   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.936009   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.936810   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42513
	I0906 18:30:17.937524   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.938126   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.938143   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.938211   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.938388   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.938402   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.938499   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.938891   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.938931   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.938946   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.938969   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.939155   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.939625   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.939703   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.939744   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.939784   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.939923   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.940763   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.941678   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.943308   13823 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0906 18:30:17.943311   13823 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0906 18:30:17.944079   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41849
	I0906 18:30:17.944771   13823 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:17.944801   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 18:30:17.944819   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.944775   13823 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:17.944907   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 18:30:17.944920   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.948201   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.948657   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.948689   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.948842   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.949234   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.949990   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.950029   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.950282   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.950943   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42051
	I0906 18:30:17.950969   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.950989   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.951044   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
	I0906 18:30:17.951238   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.951466   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.951515   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.951465   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.952056   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.952066   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.952073   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.952082   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.952138   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.952155   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.952344   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.952631   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.952687   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.952826   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.952846   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.953106   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.953314   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.953375   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0906 18:30:17.953914   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.953936   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.954109   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.954862   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0906 18:30:17.955016   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.955377   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.955393   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.955452   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.955793   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.955962   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.955973   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0906 18:30:17.956660   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.956816   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.956830   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.957324   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.957345   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.957414   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.957813   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.957859   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.958442   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.958480   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.959016   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.960122   13823 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-959832"
	I0906 18:30:17.960157   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.960504   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.960508   13823 addons.go:234] Setting addon default-storageclass=true in "addons-959832"
	I0906 18:30:17.960533   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.960553   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:17.960773   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:17.960927   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.960957   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.961028   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0906 18:30:17.963299   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0906 18:30:17.963616   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.964149   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.964171   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.964676   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.964848   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.965817   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:17.966420   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.967088   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0906 18:30:17.967322   13823 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:17.967345   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0906 18:30:17.967363   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.967560   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.968670   13823 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 18:30:17.969763   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.969781   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.970095   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 18:30:17.970112   13823 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 18:30:17.970131   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.970337   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35251
	I0906 18:30:17.970743   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.971382   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.971385   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.971412   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.972059   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.972078   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.972319   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.972519   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.972712   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.972912   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.973203   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.974390   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.974410   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.975147   13823 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0906 18:30:17.975803   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.976343   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.976370   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.976539   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.976705   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.976816   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.976940   13823 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:17.976955   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 18:30:17.976970   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.977663   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.978180   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.978553   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.980971   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.981520   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.981539   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.981727   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.981897   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.982079   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.982239   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.983455   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44503
	I0906 18:30:17.983619   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0906 18:30:17.984075   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.984656   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.984672   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.984763   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I0906 18:30:17.984898   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.985019   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.985969   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.985992   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.986044   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.986161   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.986175   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.986855   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.986875   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.987256   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.987509   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.988050   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.988397   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 18:30:17.988950   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41157
	I0906 18:30:17.989105   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.989288   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.989355   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.989528   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.989938   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.989956   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.990021   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:17.990028   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:17.990027   13823 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 18:30:17.990240   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:17.990252   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:17.990260   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:17.990268   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:17.990348   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.990523   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:17.990554   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:17.990563   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	W0906 18:30:17.990634   13823 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0906 18:30:17.990673   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:17.990882   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.991485   13823 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:17.991505   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 18:30:17.991523   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.992446   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 18:30:17.992494   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 18:30:17.992990   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34117
	I0906 18:30:17.993671   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:17.994204   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 18:30:17.994221   13823 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 18:30:17.994276   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:17.994304   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:17.994314   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:17.994319   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:17.994675   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.994705   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:17.995095   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.995127   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.995287   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.995320   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 18:30:17.995468   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.995609   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:17.995687   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:17.995715   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:17.995789   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:17.996063   13823 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0906 18:30:17.997430   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.997701   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 18:30:17.997900   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:17.997927   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:17.998085   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:17.998251   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:17.999429   13823 out.go:177]   - Using image docker.io/registry:2.8.3
	I0906 18:30:18.000423   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33437
	I0906 18:30:18.000443   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.000610   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 18:30:18.000700   13823 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 18:30:18.000713   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 18:30:18.000733   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.000992   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.001111   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:18.001653   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:18.001671   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:18.002038   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:18.002683   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:18.002727   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:18.003368   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 18:30:18.003618   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.003952   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.003970   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.004139   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.004273   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.004359   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.004434   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.005728   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 18:30:18.006862   13823 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 18:30:18.007852   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 18:30:18.007870   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 18:30:18.007888   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.010752   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.011133   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.011162   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.011278   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.011435   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.011556   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.011677   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.019869   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
	I0906 18:30:18.025324   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:18.025853   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:18.025867   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	W0906 18:30:18.026199   13823 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37452->192.168.39.98:22: read: connection reset by peer
	I0906 18:30:18.026228   13823 retry.go:31] will retry after 165.921545ms: ssh: handshake failed: read tcp 192.168.39.1:37452->192.168.39.98:22: read: connection reset by peer
	I0906 18:30:18.026287   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:18.026483   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:18.028221   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:18.028440   13823 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:18.028451   13823 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 18:30:18.028463   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.030594   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.030951   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.030970   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.031122   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.031278   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.031416   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.031526   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.046424   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I0906 18:30:18.046881   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:18.047847   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:18.047876   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:18.048219   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:18.048439   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:18.050153   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:18.052332   13823 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 18:30:18.054123   13823 out.go:177]   - Using image docker.io/busybox:stable
	I0906 18:30:18.055683   13823 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:18.055715   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 18:30:18.055735   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:18.058890   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.059267   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:18.059308   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:18.059467   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:18.059660   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:18.059835   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:18.059965   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:18.325758   13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0906 18:30:18.325780   13823 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0906 18:30:18.462745   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:18.498367   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:18.542161   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 18:30:18.542189   13823 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 18:30:18.544357   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 18:30:18.544383   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 18:30:18.562318   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:18.591769   13823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:18.592321   13823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 18:30:18.615892   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:18.619170   13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 18:30:18.619198   13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 18:30:18.623393   13823 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 18:30:18.623412   13823 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 18:30:18.632558   13823 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 18:30:18.632587   13823 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 18:30:18.642554   13823 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:18.642577   13823 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0906 18:30:18.646434   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:18.712949   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:18.744354   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 18:30:18.744376   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 18:30:18.745893   13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 18:30:18.745909   13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 18:30:18.758057   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:18.794329   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 18:30:18.794351   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 18:30:18.810523   13823 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:18.810541   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 18:30:18.819725   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0906 18:30:18.820412   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 18:30:18.820430   13823 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 18:30:18.870635   13823 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 18:30:18.870657   13823 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 18:30:18.955167   13823 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 18:30:18.955193   13823 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 18:30:19.024347   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 18:30:19.024371   13823 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 18:30:19.036090   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 18:30:19.036117   13823 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 18:30:19.061575   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 18:30:19.061599   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 18:30:19.063347   13823 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 18:30:19.063362   13823 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 18:30:19.071318   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:19.185778   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 18:30:19.185801   13823 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 18:30:19.198921   13823 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:19.198940   13823 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 18:30:19.225401   13823 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:19.225422   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 18:30:19.250965   13823 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 18:30:19.250991   13823 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 18:30:19.295032   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 18:30:19.295064   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 18:30:19.560881   13823 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:19.560903   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 18:30:19.605732   13823 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 18:30:19.605761   13823 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 18:30:19.605857   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:19.639600   13823 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 18:30:19.639626   13823 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 18:30:19.651766   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:19.815029   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:19.831850   13823 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 18:30:19.831883   13823 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 18:30:19.953978   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 18:30:19.953997   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 18:30:20.091151   13823 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:20.091171   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0906 18:30:20.208365   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 18:30:20.208395   13823 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 18:30:20.322907   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:20.592180   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 18:30:20.592203   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 18:30:20.866215   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 18:30:20.866237   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 18:30:21.296320   13823 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:21.296345   13823 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 18:30:21.533570   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:23.237459   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.774672195s)
	I0906 18:30:23.237524   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.237547   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.237911   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.237986   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.238006   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.238024   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.238036   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.238294   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.238313   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.751842   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.253438201s)
	I0906 18:30:23.751900   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.751914   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.751912   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.18956267s)
	I0906 18:30:23.751952   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.751967   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752014   13823 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.160216467s)
	I0906 18:30:23.752042   13823 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.159701916s)
	I0906 18:30:23.752057   13823 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0906 18:30:23.752091   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.136171256s)
	I0906 18:30:23.752131   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752144   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752372   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752387   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.752396   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752402   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752419   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752432   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.752442   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752445   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752450   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752518   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752555   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752587   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.752603   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.752619   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.752674   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752715   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.752737   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.752746   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.753079   13823 node_ready.go:35] waiting up to 6m0s for node "addons-959832" to be "Ready" ...
	I0906 18:30:23.753223   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.753238   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.753335   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.753364   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.753380   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.817790   13823 node_ready.go:49] node "addons-959832" has status "Ready":"True"
	I0906 18:30:23.817814   13823 node_ready.go:38] duration metric: took 64.714897ms for node "addons-959832" to be "Ready" ...
	I0906 18:30:23.817823   13823 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:23.864694   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.864718   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.864768   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:23.864803   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:23.865089   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.865109   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:23.865155   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:23.865189   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:23.865203   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	W0906 18:30:23.865293   13823 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
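	(Note: the "object has been modified" failure above is a standard optimistic-concurrency conflict — the local-path StorageClass changed between the read and the write, so the update was rejected. Below is a minimal sketch of how such an update is usually retried with client-go's conflict helper; the package name, function name, and clientset wiring are assumptions for illustration, not minikube's actual code. Only the annotation key and the idea of marking a class default come from the log.)

	    package addons

	    import (
	        "context"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/util/retry"
	    )

	    // markDefault retries the StorageClass update on 409 Conflict, re-reading
	    // the object on every attempt so the write targets the latest resourceVersion.
	    // Hypothetical helper shown for illustration only.
	    func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
	            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
	            if err != nil {
	                return err
	            }
	            if sc.Annotations == nil {
	                sc.Annotations = map[string]string{}
	            }
	            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	            return err
	        })
	    }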
	I0906 18:30:23.895688   13823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:24.386851   13823 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-959832" context rescaled to 1 replicas
	I0906 18:30:24.986957   13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 18:30:24.987010   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:24.990148   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:24.990559   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:24.990592   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:24.990724   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:24.990958   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:24.991131   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:24.991298   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:25.501366   13823 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 18:30:25.593869   13823 addons.go:234] Setting addon gcp-auth=true in "addons-959832"
	I0906 18:30:25.593929   13823 host.go:66] Checking if "addons-959832" exists ...
	I0906 18:30:25.594221   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:25.594261   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:25.609081   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0906 18:30:25.609512   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:25.609995   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:25.610010   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:25.610361   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:25.610997   13823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:30:25.611034   13823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:30:25.625831   13823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0906 18:30:25.626278   13823 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:30:25.626760   13823 main.go:141] libmachine: Using API Version  1
	I0906 18:30:25.626788   13823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:30:25.627170   13823 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:30:25.627386   13823 main.go:141] libmachine: (addons-959832) Calling .GetState
	I0906 18:30:25.629014   13823 main.go:141] libmachine: (addons-959832) Calling .DriverName
	I0906 18:30:25.629236   13823 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 18:30:25.629259   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHHostname
	I0906 18:30:25.631653   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:25.632049   13823 main.go:141] libmachine: (addons-959832) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c2:2d:3d", ip: ""} in network mk-addons-959832: {Iface:virbr1 ExpiryTime:2024-09-06 19:29:45 +0000 UTC Type:0 Mac:52:54:00:c2:2d:3d Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:addons-959832 Clientid:01:52:54:00:c2:2d:3d}
	I0906 18:30:25.632077   13823 main.go:141] libmachine: (addons-959832) DBG | domain addons-959832 has defined IP address 192.168.39.98 and MAC address 52:54:00:c2:2d:3d in network mk-addons-959832
	I0906 18:30:25.632216   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHPort
	I0906 18:30:25.632399   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHKeyPath
	I0906 18:30:25.632555   13823 main.go:141] libmachine: (addons-959832) Calling .GetSSHUsername
	I0906 18:30:25.632700   13823 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/addons-959832/id_rsa Username:docker}
	I0906 18:30:25.941079   13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:27.481753   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.835292795s)
	I0906 18:30:27.481764   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.768781047s)
	I0906 18:30:27.481804   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481809   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.723718351s)
	I0906 18:30:27.481827   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481815   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481841   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481846   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481854   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481864   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.662110283s)
	I0906 18:30:27.481888   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481903   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481917   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.410575966s)
	I0906 18:30:27.481932   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481941   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.481953   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.876072516s)
	I0906 18:30:27.481973   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.481985   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482084   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.830290669s)
	I0906 18:30:27.482101   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482111   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482256   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.667196336s)
	I0906 18:30:27.482281   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	W0906 18:30:27.482296   13823 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 18:30:27.482317   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482323   13823 retry.go:31] will retry after 254.362145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
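	(Note: the apply failure and retry above are the usual ordering race when CRDs and objects depending on them go through a single kubectl pass — the VolumeSnapshotClass cannot be mapped until the volumesnapshotclasses CRD has been registered, so the apply is simply re-run a moment later. A minimal sketch of that retry-with-backoff pattern follows; applyFn, the attempt count, and the delay values are assumptions, not minikube's actual retry.go.)

	    package addons

	    import (
	        "fmt"
	        "time"
	    )

	    // applyWithRetry re-runs applyFn with a growing delay until it succeeds or
	    // attempts are exhausted, mirroring the "will retry after ..." log entries.
	    func applyWithRetry(applyFn func() error, attempts int, initial time.Duration) error {
	        delay := initial
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = applyFn(); err == nil {
	                return nil
	            }
	            time.Sleep(delay)
	            delay *= 2 // exponential backoff between attempts
	        }
	        return fmt.Errorf("apply still failing after %d attempts: %w", attempts, err)
	    }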
	I0906 18:30:27.482304   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482348   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482355   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482362   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482365   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482369   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482372   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482374   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482381   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482386   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482391   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482395   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482402   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482411   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482419   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482426   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482399   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.159419479s)
	I0906 18:30:27.482444   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482451   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482456   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482461   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482466   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.482475   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482891   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.482928   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.482936   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.482392   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.482433   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.484341   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.484358   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.484374   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.484397   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.484405   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.484413   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.484420   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.484462   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.484469   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.484477   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.484484   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.485863   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485876   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485887   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485896   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485904   13823 addons.go:475] Verifying addon metrics-server=true in "addons-959832"
	I0906 18:30:27.485927   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.485930   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485938   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485943   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.485946   13823 addons.go:475] Verifying addon ingress=true in "addons-959832"
	I0906 18:30:27.485950   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485997   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486046   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486077   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.486084   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.485864   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486513   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.486554   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.486562   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.487477   13823 out.go:177] * Verifying ingress addon...
	I0906 18:30:27.487573   13823 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-959832 service yakd-dashboard -n yakd-dashboard
	
	I0906 18:30:27.486024   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.487691   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.487717   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:27.487728   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:27.487937   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:27.487952   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:27.487960   13823 addons.go:475] Verifying addon registry=true in "addons-959832"
	I0906 18:30:27.487962   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:27.489109   13823 out.go:177] * Verifying registry addon...
	I0906 18:30:27.490025   13823 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 18:30:27.490703   13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 18:30:27.494994   13823 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 18:30:27.495014   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:27.495422   13823 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 18:30:27.495442   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
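	(Note: the kapi.go lines above poll pods matched by a label selector until they leave Pending. A minimal sketch of that wait loop with client-go is shown below; the namespace, selector, poll interval, and timeout are illustrative assumptions, not minikube's kapi implementation.)

	    package addons

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitForPodsRunning lists pods by label and keeps polling until every
	    // matching pod reports phase Running or the timeout expires.
	    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	            if err != nil {
	                return err
	            }
	            running := 0
	            for _, p := range pods.Items {
	                if p.Status.Phase == corev1.PodRunning {
	                    running++
	                }
	            }
	            if len(pods.Items) > 0 && running == len(pods.Items) {
	                return nil
	            }
	            time.Sleep(2 * time.Second) // poll interval, same order as the log's cadence
	        }
	        return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
	    }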
	I0906 18:30:27.737115   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:27.995783   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:27.996316   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:28.405776   13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:28.525889   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:28.526140   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.000232   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:29.000400   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.288925   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.755298783s)
	I0906 18:30:29.288949   13823 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.659689548s)
	I0906 18:30:29.288969   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.288980   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.289345   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.289363   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.289373   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.289381   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.289348   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:29.289643   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.289659   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.289670   13823 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-959832"
	I0906 18:30:29.290527   13823 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:29.291464   13823 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 18:30:29.293133   13823 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0906 18:30:29.293804   13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 18:30:29.294483   13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 18:30:29.294501   13823 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 18:30:29.307557   13823 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 18:30:29.307575   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:29.501347   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:29.502636   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.549399   13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 18:30:29.549424   13823 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 18:30:29.631326   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.894156301s)
	I0906 18:30:29.631395   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.631409   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.631783   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.631805   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.631809   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:29.631815   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:29.631831   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:29.632053   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:29.632067   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:29.711353   13823 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:29.711373   13823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 18:30:29.758533   13823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:29.798367   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:29.994829   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:29.995464   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:30.298814   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:30.494755   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:30.495217   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:30.800377   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:30.927844   13823 pod_ready.go:103] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:31.011246   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:31.011996   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:31.259074   13823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.500495277s)
	I0906 18:30:31.259136   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:31.259150   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:31.259463   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:31.259567   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:31.259547   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:31.259579   13823 main.go:141] libmachine: Making call to close driver server
	I0906 18:30:31.259614   13823 main.go:141] libmachine: (addons-959832) Calling .Close
	I0906 18:30:31.259913   13823 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:30:31.259930   13823 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:30:31.259955   13823 main.go:141] libmachine: (addons-959832) DBG | Closing plugin on server side
	I0906 18:30:31.261909   13823 addons.go:475] Verifying addon gcp-auth=true in "addons-959832"
	I0906 18:30:31.263787   13823 out.go:177] * Verifying gcp-auth addon...
	I0906 18:30:31.265893   13823 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 18:30:31.298469   13823 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:30:31.298489   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:31.300480   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:31.497017   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:31.497257   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:31.769388   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:31.798048   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:31.995495   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:31.995656   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:32.269836   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:32.298842   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:32.495206   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:32.496478   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:32.769455   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:32.798535   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:32.905084   13823 pod_ready.go:98] pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:32 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.98 HostIPs:[{IP:192.168.39.98}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-06 18:30:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:23 +0000 UTC,FinishedAt:2024-09-06 18:30:30 +0000 UTC,ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2 Started:0xc0020651d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000b9f530} {Name:kube-api-access-fjvjc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000b9f540}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0906 18:30:32.905113   13823 pod_ready.go:82] duration metric: took 9.009398679s for pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace to be "Ready" ...
	E0906 18:30:32.905127   13823 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-b4zlv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:32 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-06 18:30:18 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.98 HostIPs:[{IP:192.168.39.98}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-06 18:30:18 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-06 18:30:23 +0000 UTC,FinishedAt:2024-09-06 18:30:30 +0000 UTC,ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://f4bc67c0c0201bfa9913fef66c82918641019402ebb8b02b79180f7b87c0bab2 Started:0xc0020651d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000b9f530} {Name:kube-api-access-fjvjc MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000b9f540}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0906 18:30:32.905141   13823 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.911075   13823 pod_ready.go:93] pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.911105   13823 pod_ready.go:82] duration metric: took 5.954486ms for pod "coredns-6f6b679f8f-d5d26" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.911119   13823 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.928213   13823 pod_ready.go:93] pod "etcd-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.928234   13823 pod_ready.go:82] duration metric: took 17.107089ms for pod "etcd-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.928244   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.942443   13823 pod_ready.go:93] pod "kube-apiserver-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.942474   13823 pod_ready.go:82] duration metric: took 14.222157ms for pod "kube-apiserver-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.942489   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.948544   13823 pod_ready.go:93] pod "kube-controller-manager-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:32.948568   13823 pod_ready.go:82] duration metric: took 6.069443ms for pod "kube-controller-manager-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.948594   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-df5wg" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:32.995554   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:32.996027   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:33.270077   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:33.300133   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:33.300322   13823 pod_ready.go:93] pod "kube-proxy-df5wg" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:33.300343   13823 pod_ready.go:82] duration metric: took 351.740369ms for pod "kube-proxy-df5wg" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.300356   13823 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.494781   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:33.495847   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:33.701424   13823 pod_ready.go:93] pod "kube-scheduler-addons-959832" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:33.701467   13823 pod_ready.go:82] duration metric: took 401.098684ms for pod "kube-scheduler-addons-959832" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.701495   13823 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.769360   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:33.798021   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:33.995683   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:33.997103   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.270015   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:34.299221   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:34.495406   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:34.496126   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.770094   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:34.799237   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:34.996508   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:34.997585   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:35.270568   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:35.299394   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:35.495141   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:35.495320   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:35.707531   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:35.770986   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:35.800293   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:35.996725   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:35.997639   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:36.270981   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:36.303214   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:36.494976   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:36.496783   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:36.771081   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:36.799874   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:36.995676   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:36.996010   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:37.270120   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:37.299046   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:37.494705   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:37.496067   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:37.707603   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:37.769678   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:37.798583   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:37.995037   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:37.995885   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:38.269217   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:38.298643   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:38.495448   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:38.495856   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:38.769730   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:38.799711   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.083640   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.083787   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:39.496519   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:39.496908   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:39.497701   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.499783   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.769883   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:39.798544   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:39.994338   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:39.995398   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:40.209006   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:40.272568   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:40.301397   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:40.498136   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:40.498526   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:40.770814   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:40.798522   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:40.994052   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:40.995394   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.270657   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:41.298770   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:41.498318   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.498596   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:41.770854   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:41.799666   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:41.995027   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.995612   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.270017   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:42.299094   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:42.592984   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.595535   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:42.721960   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:42.772381   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:42.799751   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:42.995172   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:42.995508   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:43.272873   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:43.298467   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.494939   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:43.495402   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.769785   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:43.798713   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.996443   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.996744   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.269175   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:44.308002   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.494478   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:44.494986   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.770210   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:44.797768   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.995782   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.997472   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.207350   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:45.269487   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:45.298388   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.494409   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.494479   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:45.769970   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:45.798375   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.995583   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:45.995736   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.269632   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:46.299154   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.495331   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.495578   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:46.769857   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:46.799172   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.995967   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.996352   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.207412   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:47.270222   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:47.300058   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.501228   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:47.501496   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.769887   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:47.798711   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.994453   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.994618   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.270499   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:48.298587   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.494874   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.494941   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:48.771487   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:48.799341   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.995078   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:48.995997   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.270055   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:49.297759   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.493704   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.496397   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.707766   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:49.769942   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:49.799020   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.994521   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.995871   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.269405   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:50.298442   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.495620   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:50.496486   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.876382   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:50.877156   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.996700   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.996938   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.269377   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:51.298953   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.495015   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.495481   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.708764   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:51.770620   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:51.798067   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.994702   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.995528   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.269440   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:52.298688   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.496129   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.497284   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:52.769844   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:52.799404   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.995549   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.995828   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.272511   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:53.299182   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.495690   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.498212   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.769884   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:53.799759   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.994840   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.994970   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.208168   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:54.270994   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:54.301366   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.494638   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:54.495314   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.769283   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:54.797866   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.272696   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.272743   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:55.272998   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.298147   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.495547   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.495711   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.770496   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:55.802302   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.995386   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.995623   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:56.268801   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:56.298461   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.494963   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.495882   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.291534   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.291868   13823 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:57.292073   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:57.292099   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.293348   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.309051   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:57.309858   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.312884   13823 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace has status "Ready":"True"
	I0906 18:30:57.312900   13823 pod_ready.go:82] duration metric: took 23.611395425s for pod "nvidia-device-plugin-daemonset-nsxpz" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:57.312922   13823 pod_ready.go:39] duration metric: took 33.495084445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:57.312943   13823 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:30:57.312998   13823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:30:57.342569   13823 api_server.go:72] duration metric: took 39.503199537s to wait for apiserver process to appear ...
	I0906 18:30:57.342597   13823 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:30:57.342618   13823 api_server.go:253] Checking apiserver healthz at https://192.168.39.98:8443/healthz ...
	I0906 18:30:57.347032   13823 api_server.go:279] https://192.168.39.98:8443/healthz returned 200:
	ok
	I0906 18:30:57.348263   13823 api_server.go:141] control plane version: v1.31.0
	I0906 18:30:57.348287   13823 api_server.go:131] duration metric: took 5.682402ms to wait for apiserver health ...
	I0906 18:30:57.348297   13823 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:30:57.359723   13823 system_pods.go:59] 18 kube-system pods found
	I0906 18:30:57.359757   13823 system_pods.go:61] "coredns-6f6b679f8f-d5d26" [8f56a285-a4a2-42b2-b904-86d4b92e1593] Running
	I0906 18:30:57.359769   13823 system_pods.go:61] "csi-hostpath-attacher-0" [077a752a-2398-4e94-b907-d0888261774c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:30:57.359778   13823 system_pods.go:61] "csi-hostpath-resizer-0" [4d49487b-d00b-4ee7-8007-fc440aad009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:30:57.359790   13823 system_pods.go:61] "csi-hostpathplugin-j7df9" [146029b8-76c4-479b-8217-00a90921e5d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:30:57.359800   13823 system_pods.go:61] "etcd-addons-959832" [2517086a-0030-456f-a07a-8973652d205c] Running
	I0906 18:30:57.359806   13823 system_pods.go:61] "kube-apiserver-addons-959832" [c93b4ce0-62b0-4e1f-9a98-76b6e7ad4fbc] Running
	I0906 18:30:57.359815   13823 system_pods.go:61] "kube-controller-manager-addons-959832" [3dc3e2e0-cdf7-4d83-8d8e-5cc86d87c45b] Running
	I0906 18:30:57.359820   13823 system_pods.go:61] "kube-ingress-dns-minikube" [1673a19c-a4a9-4d9d-bda1-e073fb44b3d8] Running
	I0906 18:30:57.359826   13823 system_pods.go:61] "kube-proxy-df5wg" [f92f8a67-fa25-410a-b7f6-928c602e53e5] Running
	I0906 18:30:57.359829   13823 system_pods.go:61] "kube-scheduler-addons-959832" [0a2458fe-333d-4ca7-b2ab-c58159f3a491] Running
	I0906 18:30:57.359834   13823 system_pods.go:61] "metrics-server-84c5f94fbc-flnx5" [01d423d8-1a69-47b2-be5a-57dc6f3f7268] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:30:57.359840   13823 system_pods.go:61] "nvidia-device-plugin-daemonset-nsxpz" [c35f7718-6879-4edb-9a8b-5b4a82ad2a7c] Running
	I0906 18:30:57.359846   13823 system_pods.go:61] "registry-6fb4cdfc84-4hp57" [995000c4-356d-4aee-b8b4-6c719240ca26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:30:57.359852   13823 system_pods.go:61] "registry-proxy-5jxb2" [8ea39930-6a75-4ad5-a074-233a2b95f98f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:30:57.359858   13823 system_pods.go:61] "snapshot-controller-56fcc65765-db2j5" [afcb8d14-41d7-444b-b16d-496ca520ee39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.359867   13823 system_pods.go:61] "snapshot-controller-56fcc65765-jjdrv" [d3df181f-bfa3-4ef4-9767-ecc84c335cc4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.359871   13823 system_pods.go:61] "storage-provisioner" [a837ebf7-7140-4baa-8b93-ea556996b204] Running
	I0906 18:30:57.359877   13823 system_pods.go:61] "tiller-deploy-b48cc5f79-d2ggh" [5951b042-9892-4eb8-b567-933475c4a163] Running
	I0906 18:30:57.359885   13823 system_pods.go:74] duration metric: took 11.581782ms to wait for pod list to return data ...
	I0906 18:30:57.359894   13823 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:30:57.364154   13823 default_sa.go:45] found service account: "default"
	I0906 18:30:57.364173   13823 default_sa.go:55] duration metric: took 4.273217ms for default service account to be created ...
	I0906 18:30:57.364181   13823 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:30:57.373118   13823 system_pods.go:86] 18 kube-system pods found
	I0906 18:30:57.373150   13823 system_pods.go:89] "coredns-6f6b679f8f-d5d26" [8f56a285-a4a2-42b2-b904-86d4b92e1593] Running
	I0906 18:30:57.373165   13823 system_pods.go:89] "csi-hostpath-attacher-0" [077a752a-2398-4e94-b907-d0888261774c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:30:57.373175   13823 system_pods.go:89] "csi-hostpath-resizer-0" [4d49487b-d00b-4ee7-8007-fc440aad009e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:30:57.373194   13823 system_pods.go:89] "csi-hostpathplugin-j7df9" [146029b8-76c4-479b-8217-00a90921e5d0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:30:57.373202   13823 system_pods.go:89] "etcd-addons-959832" [2517086a-0030-456f-a07a-8973652d205c] Running
	I0906 18:30:57.373217   13823 system_pods.go:89] "kube-apiserver-addons-959832" [c93b4ce0-62b0-4e1f-9a98-76b6e7ad4fbc] Running
	I0906 18:30:57.373223   13823 system_pods.go:89] "kube-controller-manager-addons-959832" [3dc3e2e0-cdf7-4d83-8d8e-5cc86d87c45b] Running
	I0906 18:30:57.373227   13823 system_pods.go:89] "kube-ingress-dns-minikube" [1673a19c-a4a9-4d9d-bda1-e073fb44b3d8] Running
	I0906 18:30:57.373230   13823 system_pods.go:89] "kube-proxy-df5wg" [f92f8a67-fa25-410a-b7f6-928c602e53e5] Running
	I0906 18:30:57.373237   13823 system_pods.go:89] "kube-scheduler-addons-959832" [0a2458fe-333d-4ca7-b2ab-c58159f3a491] Running
	I0906 18:30:57.373242   13823 system_pods.go:89] "metrics-server-84c5f94fbc-flnx5" [01d423d8-1a69-47b2-be5a-57dc6f3f7268] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:30:57.373246   13823 system_pods.go:89] "nvidia-device-plugin-daemonset-nsxpz" [c35f7718-6879-4edb-9a8b-5b4a82ad2a7c] Running
	I0906 18:30:57.373252   13823 system_pods.go:89] "registry-6fb4cdfc84-4hp57" [995000c4-356d-4aee-b8b4-6c719240ca26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:30:57.373257   13823 system_pods.go:89] "registry-proxy-5jxb2" [8ea39930-6a75-4ad5-a074-233a2b95f98f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:30:57.373264   13823 system_pods.go:89] "snapshot-controller-56fcc65765-db2j5" [afcb8d14-41d7-444b-b16d-496ca520ee39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.373273   13823 system_pods.go:89] "snapshot-controller-56fcc65765-jjdrv" [d3df181f-bfa3-4ef4-9767-ecc84c335cc4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:30:57.373280   13823 system_pods.go:89] "storage-provisioner" [a837ebf7-7140-4baa-8b93-ea556996b204] Running
	I0906 18:30:57.373287   13823 system_pods.go:89] "tiller-deploy-b48cc5f79-d2ggh" [5951b042-9892-4eb8-b567-933475c4a163] Running
	I0906 18:30:57.373299   13823 system_pods.go:126] duration metric: took 9.109597ms to wait for k8s-apps to be running ...
	I0906 18:30:57.373309   13823 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:30:57.373355   13823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:30:57.425478   13823 system_svc.go:56] duration metric: took 52.162346ms WaitForService to wait for kubelet
	I0906 18:30:57.425503   13823 kubeadm.go:582] duration metric: took 39.586136805s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:30:57.425533   13823 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:30:57.428818   13823 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:30:57.428842   13823 node_conditions.go:123] node cpu capacity is 2
	I0906 18:30:57.428863   13823 node_conditions.go:105] duration metric: took 3.314164ms to run NodePressure ...
	I0906 18:30:57.428878   13823 start.go:241] waiting for startup goroutines ...
	I0906 18:30:57.495273   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.495869   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.769593   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:57.798564   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.995122   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.995468   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.270153   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:58.299032   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.495028   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.495638   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.770199   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:58.797952   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.994635   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.995409   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.269612   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:59.298532   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.494666   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.495202   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:59.769637   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:30:59.799716   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.995110   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.997059   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.269925   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:00.299168   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.495168   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.495452   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.769831   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:00.798879   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.994356   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.995338   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:01.270323   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:01.298809   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:01.497749   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:01.509994   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.196171   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:02.197232   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.197446   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.198219   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.269772   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:02.299913   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.495441   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.496083   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.770038   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:02.800728   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.995143   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.995393   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.269175   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:03.298453   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.495672   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.495941   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.769214   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:03.798100   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.996193   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.996547   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.270229   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:04.300339   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:04.495048   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:04.495208   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.769698   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:04.798488   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.000395   13823 kapi.go:107] duration metric: took 37.509684094s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 18:31:05.000674   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:05.270104   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:05.297638   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.495343   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:05.770543   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:05.800954   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.994937   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.270489   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:06.299401   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:06.495523   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.775824   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:06.804605   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.000907   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.281094   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:07.306915   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.818623   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:07.820944   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.821122   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.994968   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:08.269992   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:08.298837   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.493945   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:08.769482   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:08.798377   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.994691   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:09.269835   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:09.299230   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:09.502957   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:09.769997   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:09.798765   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.127650   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:10.275919   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:10.300104   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.495617   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:10.769823   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:10.798656   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.995288   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:11.270073   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:11.299546   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.494131   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:11.771059   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:11.799920   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.995856   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:12.274737   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:12.299392   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.494262   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:12.769625   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:12.798619   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.995358   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:13.316812   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:13.317852   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.495815   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:13.769181   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:13.799259   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.995199   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:14.276613   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:14.379012   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.494898   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:14.770331   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:14.798773   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.995445   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:15.272540   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:15.301141   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.495285   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:15.770353   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:15.798730   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.994520   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:16.270657   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:16.300620   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.494263   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:16.770371   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:16.799256   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.994749   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:17.269747   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:17.298951   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.494719   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:17.769832   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:17.799470   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.994977   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:18.269720   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:18.310969   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.494867   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:18.769348   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:18.798225   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.994850   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:19.282829   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:19.384038   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.497045   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:19.770599   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:19.801611   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.996550   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:20.270037   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:20.311775   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.498768   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:20.769965   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:20.799204   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.997161   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:21.270035   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:21.299010   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.494660   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:21.769290   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:21.798619   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.994674   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:22.269883   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:22.300295   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:22.496723   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:22.771097   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:22.799152   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.013066   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:23.270485   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:23.299028   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.496372   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:23.770017   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:23.801362   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.996357   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:24.270445   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:24.299776   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:24.494072   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.030314   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:25.030783   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.031442   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.269910   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:25.371610   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.494715   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.770973   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:25.799735   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.994854   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:26.270976   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:26.299500   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:26.494510   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:26.770729   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:26.873976   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:26.993699   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:27.269916   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:27.299203   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:27.494353   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:27.771154   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:27.798428   13823 kapi.go:107] duration metric: took 58.504619679s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 18:31:27.996381   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:28.271088   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:28.493970   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:28.769758   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:28.994788   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:29.271720   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:29.496574   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:29.770127   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:29.994752   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:30.464639   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:30.495124   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:30.770101   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:30.995408   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:31.270144   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:31.495730   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:31.769464   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:31.996345   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:32.269861   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:32.495930   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:32.768939   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:32.996483   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:33.269235   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:33.494459   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:33.769303   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:33.994740   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:34.270162   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:34.494209   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:34.772239   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:34.995450   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:35.270037   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:35.494858   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:35.770518   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:35.994084   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:36.270405   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:36.496230   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:36.770326   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:36.994330   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:37.270147   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:37.493620   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:37.778857   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:38.113592   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:38.270475   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:38.494284   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:38.769614   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:39.006516   13823 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:39.273731   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:39.495548   13823 kapi.go:107] duration metric: took 1m12.005524271s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 18:31:39.770852   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:40.269133   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:40.769688   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:41.270179   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:41.769459   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:42.270714   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:42.770252   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:43.270294   13823 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:31:43.770209   13823 kapi.go:107] duration metric: took 1m12.504314576s to wait for kubernetes.io/minikube-addons=gcp-auth ...
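The block above is minikube polling three label selectors until the matching pods leave Pending; each selector finishes with a "duration metric: took ..." line once its pods are up. Roughly the same check can be reproduced against this cluster with kubectl wait. The selectors below are taken from the log, but the namespaces are assumptions based on the usual minikube addon layout (they are not printed in these lines), and condition=Ready is a close but not identical stand-in for the Running check minikube performs:

	kubectl --context addons-959832 -n kube-system wait pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=6m
	kubectl --context addons-959832 -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
	kubectl --context addons-959832 -n gcp-auth wait pod -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=6m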
	I0906 18:31:43.771902   13823 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-959832 cluster.
	I0906 18:31:43.773493   13823 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 18:31:43.774994   13823 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 18:31:43.776439   13823 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, default-storageclass, nvidia-device-plugin, cloud-spanner, metrics-server, inspektor-gadget, helm-tiller, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0906 18:31:43.778228   13823 addons.go:510] duration metric: took 1m25.938813235s for enable addons: enabled=[storage-provisioner ingress-dns default-storageclass nvidia-device-plugin cloud-spanner metrics-server inspektor-gadget helm-tiller yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0906 18:31:43.778280   13823 start.go:246] waiting for cluster config update ...
	I0906 18:31:43.778303   13823 start.go:255] writing updated cluster config ...
	I0906 18:31:43.778560   13823 ssh_runner.go:195] Run: rm -f paused
	I0906 18:31:43.828681   13823 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 18:31:43.830792   13823 out.go:177] * Done! kubectl is now configured to use "addons-959832" cluster and "default" namespace by default
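The gcp-auth messages above say that credentials get mounted into every newly created pod unless the pod configuration carries a label with the `gcp-auth-skip-secret` key. As a minimal sketch of opting a single pod out, assuming the admission webhook honors the label at pod creation time (the pod name, image, and label value here are placeholders, not taken from this run):

	kubectl --context addons-959832 run skip-gcp-demo --image=busybox --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600

Pods created before the label was set keep whatever mounts they already have; as the log notes, they would need to be recreated, or the addon re-run with --refresh.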
	
	
	==> CRI-O <==
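The entries below are CRI-O debug logs of CRI API traffic, most likely the kubelet's periodic polling: Version, ImageFsInfo, and ListContainers requests, each answered with the node's full container list. If crictl is available on the node and CRI-O is listening on its default socket (both assumptions, not stated in this log), the same three calls can be issued by hand:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a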
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.473328817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648301473304441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=207fcb8e-9f3e-4a2b-b524-fa80a1248405 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.473822308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a61e9063-08f9-49cd-97e1-2cb46635ef99 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.473893577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a61e9063-08f9-49cd-97e1-2cb46635ef99 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.474174357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a77b0e39569e432e0f0abecfc0d0dc295080be9aacb137b80a5872ba71c5293,PodSandboxId:fe6b4e93538c724751157666e66ca4abedbda45f250c23a4f6257b68cdebda47,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725648151296825106,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-d7bkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4254132-d806-4728-8fb3-6eb98f48b868,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c6
9cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e486
87f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-42b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0457335
66833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f5
4c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca504
8cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a61e9063-08f9-49cd-97e1-2cb46635ef99 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.511760689Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c9248e1-6941-4059-a39d-97f43d6e7595 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.511854496Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c9248e1-6941-4059-a39d-97f43d6e7595 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.512940075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c18e4871-fe1a-4f3f-a2fb-417008834566 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.514343840Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648301514320283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c18e4871-fe1a-4f3f-a2fb-417008834566 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.515678459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5a3bb8d-b54f-4df7-a340-712631ffe358 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.515745881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5a3bb8d-b54f-4df7-a340-712631ffe358 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.516019423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a77b0e39569e432e0f0abecfc0d0dc295080be9aacb137b80a5872ba71c5293,PodSandboxId:fe6b4e93538c724751157666e66ca4abedbda45f250c23a4f6257b68cdebda47,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725648151296825106,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-d7bkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4254132-d806-4728-8fb3-6eb98f48b868,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c6
9cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e486
87f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-42b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0457335
66833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f5
4c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca504
8cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5a3bb8d-b54f-4df7-a340-712631ffe358 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.552983549Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7660dc5b-f723-4139-98db-662fab203419 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.553073312Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7660dc5b-f723-4139-98db-662fab203419 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.554357033Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9aa10299-16fc-48f6-bb6c-86f876c49aa2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.555595363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648301555568109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9aa10299-16fc-48f6-bb6c-86f876c49aa2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.556137714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29814dc7-41b2-4c36-853f-3b39782c31ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.556212080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29814dc7-41b2-4c36-853f-3b39782c31ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.556527419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a77b0e39569e432e0f0abecfc0d0dc295080be9aacb137b80a5872ba71c5293,PodSandboxId:fe6b4e93538c724751157666e66ca4abedbda45f250c23a4f6257b68cdebda47,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725648151296825106,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-d7bkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4254132-d806-4728-8fb3-6eb98f48b868,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c6
9cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e486
87f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-42b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0457335
66833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f5
4c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca504
8cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29814dc7-41b2-4c36-853f-3b39782c31ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.597345651Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2aa4e54-b20a-4bfd-aec4-cdf843add147 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.597459955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2aa4e54-b20a-4bfd-aec4-cdf843add147 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.598347283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4aa4e318-c002-4327-b8d3-cd11aae953fa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.599794221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648301599767099,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4aa4e318-c002-4327-b8d3-cd11aae953fa name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.600920017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f2a00a2-8f9e-4b8b-9035-a597f8ddf061 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.601027700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f2a00a2-8f9e-4b8b-9035-a597f8ddf061 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:45:01 addons-959832 crio[670]: time="2024-09-06 18:45:01.601638693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2a77b0e39569e432e0f0abecfc0d0dc295080be9aacb137b80a5872ba71c5293,PodSandboxId:fe6b4e93538c724751157666e66ca4abedbda45f250c23a4f6257b68cdebda47,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1725648151296825106,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-d7bkf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4254132-d806-4728-8fb3-6eb98f48b868,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ff4cd5a201009ea6af4ce0364f38b4793a14149dc1c5249b1fa61a043a41b9,PodSandboxId:e9d551110687aba8994d23d47511ea0805745dac7b53d3d563abd76d8864df9b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1725648014702855117,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d21e1ab5-c3ed-4c03-9a60-7b9908550e31,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961,PodSandboxId:6009e3b23d6b9d8c453faf6cf70725c5cc8e36ce18d3bde895b9cc1434ce97a7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1725647502516117138,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-wbp4z,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: cf54422d-d65f-4c6f-b4c6-4a8f1906e822,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbdca73cd5f41dc19073362525a00dc3f34a7b118a1eced2f1f60f50f10d8174,PodSandboxId:ebd17a7bfd07d499a53505e299b14ead4e68983d26d2f04c474b3eb82f514655,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1725647465857245191,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-flnx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01d423d8-1a69-47b2-be5a-57dc6f3f7268,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b5740dfd945acf05ba340f3cafc9ef87553fae775557858bb5b0f655ade4,PodSandboxId:bb57b9b0a87b03923d94f4373a3bb978de34b066e2a1963bdc171f668e038ed8,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c6
9cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1725647457395940646,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-wmllc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d4255597-ad63-4381-a87e-0feac7b3d381,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120,PodSandboxId:fb03fe115a315da7217279cac10297d1cf9d3342a00125ba8ae3ec4838bb50b0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725647425386516989,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a837ebf7-7140-4baa-8b93-ea556996b204,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025,PodSandboxId:cf16f9b0ce0a6d76dcb3c273ffcf89e46468172e4a354713fdb83f146f33c736,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e486
87f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725647422486143182,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5d26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f56a285-a4a2-42b2-b904-86d4b92e1593,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f,PodSandboxId:a16d4e27651e79251e703049c2b44e8f6646848facecf048c4c78714faa79b55,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725647420019743430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f92f8a67-fa25-410a-b7f6-928c602e53e5,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49,PodSandboxId:08d02ee1f1b83c6c0903e2dd6206fcf383df21d3829fbb520f087eae29ba41f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0457335
66833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725647408046879114,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c1bc64573e9c4b470d641f7ff2c70f,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832,PodSandboxId:3810e200d7f2cb00a9b9f1c7108f70277369ee23fdc4f357a599c490d4ec2842,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f5
4c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725647408042170824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 182bbb480465c60eefa353c0707151f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d,PodSandboxId:1340e66e90fd2e2c0fb43f1c87f21abc2308ccae5eeef0a3805358a22397cf85,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca504
8cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725647408033351290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60955b53099907772dd53e04a09b628,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9,PodSandboxId:6a4a01ed6ac2784ecf41dcd4ff3622f6d3e995eccec68b8f604952c0317c802c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annot
ations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725647407961011319,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-959832,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b72927349b6116fbc750d9943b9c706,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f2a00a2-8f9e-4b8b-9035-a597f8ddf061 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2a77b0e39569e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   fe6b4e93538c7       hello-world-app-55bf9c44b4-d7bkf
	47ff4cd5a2010       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         4 minutes ago       Running             nginx                     0                   e9d551110687a       nginx
	bff22acf8afe6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            13 minutes ago      Running             gcp-auth                  0                   6009e3b23d6b9       gcp-auth-89d5ffd79-wbp4z
	dbdca73cd5f41       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   13 minutes ago      Running             metrics-server            0                   ebd17a7bfd07d       metrics-server-84c5f94fbc-flnx5
	d8e6b5740dfd9       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        14 minutes ago      Running             local-path-provisioner    0                   bb57b9b0a87b0       local-path-provisioner-86d989889c-wmllc
	095caffa96df4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago      Running             storage-provisioner       0                   fb03fe115a315       storage-provisioner
	daf771eda93ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        14 minutes ago      Running             coredns                   0                   cf16f9b0ce0a6       coredns-6f6b679f8f-d5d26
	f62f176bebb98       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        14 minutes ago      Running             kube-proxy                0                   a16d4e27651e7       kube-proxy-df5wg
	0976f654c6450       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        14 minutes ago      Running             kube-controller-manager   0                   08d02ee1f1b83       kube-controller-manager-addons-959832
	0062bd6dff511       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        14 minutes ago      Running             kube-scheduler            0                   3810e200d7f2c       kube-scheduler-addons-959832
	14011f30e4b49       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        14 minutes ago      Running             etcd                      0                   1340e66e90fd2       etcd-addons-959832
	f03b3137e10ab       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        14 minutes ago      Running             kube-apiserver            0                   6a4a01ed6ac27       kube-apiserver-addons-959832
	
	
	==> coredns [daf771eda93ba59310506c84dab2136e5d50fcf9f39453e9cee2fb14ff88a025] <==
	[INFO] 10.244.0.8:53109 - 30493 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000031299s
	[INFO] 10.244.0.8:51164 - 21323 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000073777s
	[INFO] 10.244.0.8:51164 - 9807 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00003634s
	[INFO] 10.244.0.8:33912 - 61080 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030797s
	[INFO] 10.244.0.8:33912 - 53146 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000256s
	[INFO] 10.244.0.8:51671 - 8759 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027086s
	[INFO] 10.244.0.8:51671 - 2357 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069078s
	[INFO] 10.244.0.8:58937 - 47939 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000029815s
	[INFO] 10.244.0.8:58937 - 55677 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000025038s
	[INFO] 10.244.0.8:59574 - 33097 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000055434s
	[INFO] 10.244.0.8:59574 - 49222 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000032883s
	[INFO] 10.244.0.8:34345 - 33033 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000025905s
	[INFO] 10.244.0.8:34345 - 61711 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000025782s
	[INFO] 10.244.0.8:40854 - 19935 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000024436s
	[INFO] 10.244.0.8:40854 - 16861 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000022079s
	[INFO] 10.244.0.8:54975 - 41823 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000033452s
	[INFO] 10.244.0.8:54975 - 6745 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041358s
	[INFO] 10.244.0.22:39608 - 5840 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000623407s
	[INFO] 10.244.0.22:47451 - 10373 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000773196s
	[INFO] 10.244.0.22:47147 - 43920 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096203s
	[INFO] 10.244.0.22:37201 - 19027 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000052062s
	[INFO] 10.244.0.22:51583 - 38377 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070102s
	[INFO] 10.244.0.22:37854 - 16491 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000049501s
	[INFO] 10.244.0.22:55914 - 7247 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000846443s
	[INFO] 10.244.0.22:51764 - 46657 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001169257s
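	The repeated NXDOMAIN answers above are expected rather than a failure on their own: they are the ndots-driven search-list expansion, in which the pod's resolv.conf suffixes (<namespace>.svc.cluster.local, svc.cluster.local, cluster.local) are appended to the queried name, with the un-suffixed name then resolving NOERROR. A representative pod resolv.conf that would produce the kube-system-prefixed lookups seen here is sketched below; the nameserver address and ndots value are typical kubeadm defaults assumed for illustration, not values read from this run.
	
	    search kube-system.svc.cluster.local svc.cluster.local cluster.local
	    nameserver 10.96.0.10   # assumed cluster DNS ClusterIP (kubeadm default), not taken from these logs
	    options ndots:5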
	
	
	==> describe nodes <==
	Name:               addons-959832
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-959832
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=addons-959832
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_30_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-959832
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:30:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-959832
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:44:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:42:49 +0000   Fri, 06 Sep 2024 18:30:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:42:49 +0000   Fri, 06 Sep 2024 18:30:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:42:49 +0000   Fri, 06 Sep 2024 18:30:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:42:49 +0000   Fri, 06 Sep 2024 18:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    addons-959832
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 789fcfcd81af4b61a593ac3d592db28c
	  System UUID:                789fcfcd-81af-4b61-a593-ac3d592db28c
	  Boot ID:                    ca224247-03d2-489f-a0b8-0a2fbb84d9da
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-d7bkf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  gcp-auth                    gcp-auth-89d5ffd79-wbp4z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-d5d26                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-addons-959832                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         14m
	  kube-system                 kube-apiserver-addons-959832               250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-959832      200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-df5wg                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-959832               100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-wmllc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node addons-959832 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node addons-959832 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node addons-959832 status is now: NodeHasSufficientPID
	  Normal  NodeReady                14m   kubelet          Node addons-959832 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node addons-959832 event: Registered Node addons-959832 in Controller
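	For reference, the Allocated resources figures above are simply the column sums from the Non-terminated Pods table:
	
	    cpu requests:    100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 750m  → 750m / 2000m allocatable ≈ 37%
	    memory requests: 70Mi (coredns) + 100Mi (etcd) = 170Mi  → 170Mi / 3912780Ki (≈ 3821Mi) allocatable ≈ 4%
	    memory limits:   170Mi (coredns only)                   → ≈ 4%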
	
	
	==> dmesg <==
	[Sep 6 18:31] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.023954] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.411470] kauditd_printk_skb: 60 callbacks suppressed
	[  +6.032630] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.000760] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.371405] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.464629] kauditd_printk_skb: 42 callbacks suppressed
	[  +9.171733] kauditd_printk_skb: 9 callbacks suppressed
	[Sep 6 18:32] kauditd_printk_skb: 30 callbacks suppressed
	[Sep 6 18:34] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:37] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:39] kauditd_printk_skb: 28 callbacks suppressed
	[Sep 6 18:40] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.061671] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.069446] kauditd_printk_skb: 25 callbacks suppressed
	[  +8.609090] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.878882] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.370924] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.422494] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.580656] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.557034] kauditd_printk_skb: 4 callbacks suppressed
	[Sep 6 18:41] kauditd_printk_skb: 42 callbacks suppressed
	[  +6.420844] kauditd_printk_skb: 9 callbacks suppressed
	[Sep 6 18:42] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.637554] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [14011f30e4b49ec90382d774b0087d4f1086dffb1bbe260740f79ec2db40c84d] <==
	{"level":"info","ts":"2024-09-06T18:31:30.449052Z","caller":"traceutil/trace.go:171","msg":"trace[147865116] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"195.658402ms","start":"2024-09-06T18:31:30.253384Z","end":"2024-09-06T18:31:30.449042Z","steps":["trace[147865116] 'process raft request'  (duration: 195.381086ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:31:30.449255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.027216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:30.449308Z","caller":"traceutil/trace.go:171","msg":"trace[1936020184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"194.091492ms","start":"2024-09-06T18:31:30.255208Z","end":"2024-09-06T18:31:30.449299Z","steps":["trace[1936020184] 'agreement among raft nodes before linearized reading'  (duration: 194.016579ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:38.095195Z","caller":"traceutil/trace.go:171","msg":"trace[688394279] linearizableReadLoop","detail":"{readStateIndex:1162; appliedIndex:1161; }","duration":"115.853572ms","start":"2024-09-06T18:31:37.979325Z","end":"2024-09-06T18:31:38.095179Z","steps":["trace[688394279] 'read index received'  (duration: 115.687137ms)","trace[688394279] 'applied index is now lower than readState.Index'  (duration: 165.625µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-06T18:31:38.095479Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.064057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:31:38.095541Z","caller":"traceutil/trace.go:171","msg":"trace[1813618553] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1130; }","duration":"116.211558ms","start":"2024-09-06T18:31:37.979321Z","end":"2024-09-06T18:31:38.095532Z","steps":["trace[1813618553] 'agreement among raft nodes before linearized reading'  (duration: 116.005384ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:38.095837Z","caller":"traceutil/trace.go:171","msg":"trace[2080125568] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"147.639748ms","start":"2024-09-06T18:31:37.948183Z","end":"2024-09-06T18:31:38.095822Z","steps":["trace[2080125568] 'process raft request'  (duration: 146.880754ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:31:42.416683Z","caller":"traceutil/trace.go:171","msg":"trace[91810177] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"156.247568ms","start":"2024-09-06T18:31:42.260415Z","end":"2024-09-06T18:31:42.416663Z","steps":["trace[91810177] 'process raft request'  (duration: 155.748211ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:40:07.229181Z","caller":"traceutil/trace.go:171","msg":"trace[484312089] linearizableReadLoop","detail":"{readStateIndex:2159; appliedIndex:2158; }","duration":"409.788256ms","start":"2024-09-06T18:40:06.819346Z","end":"2024-09-06T18:40:07.229135Z","steps":["trace[484312089] 'read index received'  (duration: 409.628912ms)","trace[484312089] 'applied index is now lower than readState.Index'  (duration: 158.846µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-06T18:40:07.229379Z","caller":"traceutil/trace.go:171","msg":"trace[1656832041] transaction","detail":"{read_only:false; response_revision:2017; number_of_response:1; }","duration":"491.002048ms","start":"2024-09-06T18:40:06.738356Z","end":"2024-09-06T18:40:07.229358Z","steps":["trace[1656832041] 'process raft request'  (duration: 490.652338ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:40:07.229604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.584673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:40:07.229643Z","caller":"traceutil/trace.go:171","msg":"trace[1915074209] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2017; }","duration":"248.626111ms","start":"2024-09-06T18:40:06.981009Z","end":"2024-09-06T18:40:07.229635Z","steps":["trace[1915074209] 'agreement among raft nodes before linearized reading'  (duration: 248.574709ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:40:07.229740Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:40:06.738339Z","time spent":"491.264052ms","remote":"127.0.0.1:39516","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-959832\" mod_revision:1958 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-959832\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-959832\" > >"}
	{"level":"warn","ts":"2024-09-06T18:40:07.229558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"410.139686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-06T18:40:07.229900Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.345839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-06T18:40:07.229941Z","caller":"traceutil/trace.go:171","msg":"trace[1213588532] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2017; }","duration":"183.385298ms","start":"2024-09-06T18:40:07.046548Z","end":"2024-09-06T18:40:07.229933Z","steps":["trace[1213588532] 'agreement among raft nodes before linearized reading'  (duration: 183.300185ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:40:07.229918Z","caller":"traceutil/trace.go:171","msg":"trace[1459748069] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2017; }","duration":"410.570505ms","start":"2024-09-06T18:40:06.819339Z","end":"2024-09-06T18:40:07.229910Z","steps":["trace[1459748069] 'agreement among raft nodes before linearized reading'  (duration: 410.06832ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T18:40:07.230002Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T18:40:06.819307Z","time spent":"410.688119ms","remote":"127.0.0.1:39260","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-09-06T18:40:09.281386Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1536}
	{"level":"info","ts":"2024-09-06T18:40:09.333184Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1536,"took":"51.266331ms","hash":4192817885,"current-db-size-bytes":6647808,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3444736,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-06T18:40:09.333251Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4192817885,"revision":1536,"compact-revision":-1}
	{"level":"info","ts":"2024-09-06T18:41:05.745354Z","caller":"traceutil/trace.go:171","msg":"trace[486873728] transaction","detail":"{read_only:false; response_revision:2438; number_of_response:1; }","duration":"152.706273ms","start":"2024-09-06T18:41:05.592614Z","end":"2024-09-06T18:41:05.745320Z","steps":["trace[486873728] 'process raft request'  (duration: 152.60606ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-06T18:41:37.968550Z","caller":"traceutil/trace.go:171","msg":"trace[849290624] linearizableReadLoop","detail":"{readStateIndex:2693; appliedIndex:2692; }","duration":"150.307732ms","start":"2024-09-06T18:41:37.818226Z","end":"2024-09-06T18:41:37.968534Z","steps":["trace[849290624] 'read index received'  (duration: 148.672472ms)","trace[849290624] 'applied index is now lower than readState.Index'  (duration: 1.634577ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-06T18:41:37.968874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.568984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T18:41:37.968936Z","caller":"traceutil/trace.go:171","msg":"trace[1335768279] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2517; }","duration":"150.706196ms","start":"2024-09-06T18:41:37.818222Z","end":"2024-09-06T18:41:37.968928Z","steps":["trace[1335768279] 'agreement among raft nodes before linearized reading'  (duration: 150.544871ms)"],"step_count":1}
	
	
	==> gcp-auth [bff22acf8afe6ce3451f82f051e3eed315de5e7150e4ac9b8d62df8a6a1be961] <==
	2024/09/06 18:31:44 Ready to write response ...
	2024/09/06 18:39:57 Ready to marshal response ...
	2024/09/06 18:39:57 Ready to write response ...
	2024/09/06 18:40:01 Ready to marshal response ...
	2024/09/06 18:40:01 Ready to write response ...
	2024/09/06 18:40:03 Ready to marshal response ...
	2024/09/06 18:40:03 Ready to write response ...
	2024/09/06 18:40:12 Ready to marshal response ...
	2024/09/06 18:40:12 Ready to write response ...
	2024/09/06 18:40:20 Ready to marshal response ...
	2024/09/06 18:40:20 Ready to write response ...
	2024/09/06 18:40:36 Ready to marshal response ...
	2024/09/06 18:40:36 Ready to write response ...
	2024/09/06 18:40:36 Ready to marshal response ...
	2024/09/06 18:40:36 Ready to write response ...
	2024/09/06 18:40:43 Ready to marshal response ...
	2024/09/06 18:40:43 Ready to write response ...
	2024/09/06 18:41:01 Ready to marshal response ...
	2024/09/06 18:41:01 Ready to write response ...
	2024/09/06 18:41:01 Ready to marshal response ...
	2024/09/06 18:41:01 Ready to write response ...
	2024/09/06 18:41:01 Ready to marshal response ...
	2024/09/06 18:41:01 Ready to write response ...
	2024/09/06 18:42:29 Ready to marshal response ...
	2024/09/06 18:42:29 Ready to write response ...
	
	
	==> kernel <==
	 18:45:02 up 15 min,  0 users,  load average: 0.23, 0.74, 0.64
	Linux addons-959832 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f03b3137e10ab8471f51a464e39a09ab1f9540ce8d582d85a9f0a696db14b3e9] <==
	E0906 18:32:14.711932       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0906 18:32:14.714123       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.186.155:443: connect: connection refused" logger="UnhandledError"
	E0906 18:32:14.719474       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.186.155:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.186.155:443: connect: connection refused" logger="UnhandledError"
	I0906 18:32:14.784984       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0906 18:39:53.218243       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0906 18:39:54.261305       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0906 18:40:11.987036       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0906 18:40:12.163983       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.110.216"}
	I0906 18:40:13.051545       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0906 18:40:35.983222       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:35.983535       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:40:36.005118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:36.005246       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:40:36.035687       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:36.035737       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:40:36.054186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:40:36.054461       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0906 18:40:37.036569       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0906 18:40:37.057021       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0906 18:40:37.073802       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0906 18:41:01.741248       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.147.21"}
	I0906 18:42:30.199507       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.38.159"}
	
	
	==> kube-controller-manager [0976f654c6450231f8d8713b6cb6a9ad7d5d1293e842e1a0a28e46efae911c49] <==
	W0906 18:42:56.246194       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:56.246329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:02.163403       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:02.163521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:06.986522       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:06.986586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:30.488262       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:30.488503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:42.149343       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:42.149544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:46.565829       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:46.565883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:47.956034       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:47.956091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:44:18.577769       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:44:18.577926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:44:18.800400       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:44:18.800582       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:44:26.480164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:44:26.480277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:44:37.499270       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:44:37.499415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:45:00.528619       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="11.662µs"
	W0906 18:45:01.866132       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:45:01.866178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [f62f176bebb98fb659bd26dd2fcd8aaacbd327ba8a1d52fe265fd0af05fd8b6f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:30:20.895600       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:30:20.905684       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.98"]
	E0906 18:30:20.905767       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:30:20.981385       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:30:20.981522       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:30:20.981552       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:30:20.986309       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:30:20.986680       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:30:20.986707       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:30:20.988245       1 config.go:197] "Starting service config controller"
	I0906 18:30:20.988269       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:30:20.988299       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:30:20.988303       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:30:20.988869       1 config.go:326] "Starting node config controller"
	I0906 18:30:20.988881       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:30:21.089002       1 shared_informer.go:320] Caches are synced for node config
	I0906 18:30:21.089043       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:30:21.089077       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0062bd6dff5114e52bf85cc8bcbeb1209192735081baa2f7958e752600429832] <==
	W0906 18:30:10.632826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:10.632881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:10.632992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 18:30:10.633043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:10.633145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 18:30:10.633198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:10.633303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:30:10.633365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.559856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 18:30:11.559915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.591626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:11.591724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.593014       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 18:30:11.593712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.624825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 18:30:11.625533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.640090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:11.640140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.646831       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 18:30:11.646890       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0906 18:30:11.875922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 18:30:11.876131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:11.954173       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 18:30:11.954234       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0906 18:30:14.512534       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 18:44:13 addons-959832 kubelet[1215]: E0906 18:44:13.351077    1215 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 18:44:13 addons-959832 kubelet[1215]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 18:44:13 addons-959832 kubelet[1215]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 18:44:13 addons-959832 kubelet[1215]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 18:44:13 addons-959832 kubelet[1215]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 18:44:13 addons-959832 kubelet[1215]: E0906 18:44:13.848606    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648253848132917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:44:13 addons-959832 kubelet[1215]: E0906 18:44:13.848730    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648253848132917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:44:20 addons-959832 kubelet[1215]: E0906 18:44:20.337846    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1c130620-63bc-4232-b463-81e6378edb12"
	Sep 06 18:44:23 addons-959832 kubelet[1215]: E0906 18:44:23.851746    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648263851275551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:44:23 addons-959832 kubelet[1215]: E0906 18:44:23.851826    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648263851275551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:44:33 addons-959832 kubelet[1215]: E0906 18:44:33.338583    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1c130620-63bc-4232-b463-81e6378edb12"
	Sep 06 18:44:33 addons-959832 kubelet[1215]: E0906 18:44:33.854829    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648273854514314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:44:33 addons-959832 kubelet[1215]: E0906 18:44:33.854876    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648273854514314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:44:43 addons-959832 kubelet[1215]: E0906 18:44:43.857584    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648283857156712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:44:43 addons-959832 kubelet[1215]: E0906 18:44:43.857628    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648283857156712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:44:48 addons-959832 kubelet[1215]: E0906 18:44:48.337876    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1c130620-63bc-4232-b463-81e6378edb12"
	Sep 06 18:44:53 addons-959832 kubelet[1215]: E0906 18:44:53.860336    1215 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648293860023728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:44:53 addons-959832 kubelet[1215]: E0906 18:44:53.860648    1215 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648293860023728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579760,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:45:01 addons-959832 kubelet[1215]: E0906 18:45:01.337922    1215 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1c130620-63bc-4232-b463-81e6378edb12"
	Sep 06 18:45:01 addons-959832 kubelet[1215]: I0906 18:45:01.949697    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4dw9\" (UniqueName: \"kubernetes.io/projected/01d423d8-1a69-47b2-be5a-57dc6f3f7268-kube-api-access-n4dw9\") pod \"01d423d8-1a69-47b2-be5a-57dc6f3f7268\" (UID: \"01d423d8-1a69-47b2-be5a-57dc6f3f7268\") "
	Sep 06 18:45:01 addons-959832 kubelet[1215]: I0906 18:45:01.949758    1215 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/01d423d8-1a69-47b2-be5a-57dc6f3f7268-tmp-dir\") pod \"01d423d8-1a69-47b2-be5a-57dc6f3f7268\" (UID: \"01d423d8-1a69-47b2-be5a-57dc6f3f7268\") "
	Sep 06 18:45:01 addons-959832 kubelet[1215]: I0906 18:45:01.950153    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d423d8-1a69-47b2-be5a-57dc6f3f7268-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "01d423d8-1a69-47b2-be5a-57dc6f3f7268" (UID: "01d423d8-1a69-47b2-be5a-57dc6f3f7268"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 06 18:45:01 addons-959832 kubelet[1215]: I0906 18:45:01.959748    1215 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d423d8-1a69-47b2-be5a-57dc6f3f7268-kube-api-access-n4dw9" (OuterVolumeSpecName: "kube-api-access-n4dw9") pod "01d423d8-1a69-47b2-be5a-57dc6f3f7268" (UID: "01d423d8-1a69-47b2-be5a-57dc6f3f7268"). InnerVolumeSpecName "kube-api-access-n4dw9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:45:02 addons-959832 kubelet[1215]: I0906 18:45:02.050308    1215 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-n4dw9\" (UniqueName: \"kubernetes.io/projected/01d423d8-1a69-47b2-be5a-57dc6f3f7268-kube-api-access-n4dw9\") on node \"addons-959832\" DevicePath \"\""
	Sep 06 18:45:02 addons-959832 kubelet[1215]: I0906 18:45:02.050380    1215 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/01d423d8-1a69-47b2-be5a-57dc6f3f7268-tmp-dir\") on node \"addons-959832\" DevicePath \"\""
	
	
	==> storage-provisioner [095caffa96df436709672023c8d90d08dc7c526203f0df410664c09842e71120] <==
	I0906 18:30:26.339092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:30:26.364532       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:30:26.364614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:30:26.389908       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:30:26.390911       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3!
	I0906 18:30:26.391024       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c870ecaa-1488-487e-a063-0e518015e13e", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3 became leader
	I0906 18:30:26.492036       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-959832_62830d6f-023a-411e-acc8-7eff326e33b3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-959832 -n addons-959832
helpers_test.go:261: (dbg) Run:  kubectl --context addons-959832 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-959832 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-959832 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-959832/192.168.39.98
	Start Time:       Fri, 06 Sep 2024 18:31:44 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n8sxx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n8sxx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-959832
	  Normal   Pulling    11m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m13s (x42 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (316.19s)
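The post-mortem above first lists every pod whose phase is not Running (kubectl --context addons-959832 get po --field-selector=status.phase!=Running) and then describes each one. The snippet below is a minimal client-go equivalent of that first listing step, included only as an illustration of the filter being applied; the kubeconfig path is an assumed placeholder and is not how the test harness actually connects.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for illustration only; the test uses the
	// profile's context via "kubectl --context addons-959832".
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same filter the helper uses: every pod whose phase is not Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("non-running pod: %s/%s (%s)\n", p.Namespace, p.Name, p.Status.Phase)
	}
}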

                                                
                                    
x
+
TestFunctional/parallel/License (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-linux-amd64 license: exit status 40 (113.34004ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.11s)
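The INET_LICENSES failure above is produced when the license download endpoint answers with HTTP 404 instead of 200. The sketch below shows that kind of status check in isolation; downloadLicenses and the URL are hypothetical stand-ins for illustration, not minikube's actual download code or endpoint.

package main

import (
	"fmt"
	"net/http"
)

// downloadLicenses is a hypothetical helper: it only verifies that the
// download endpoint answers 200, mirroring the "did not return a 200,
// received: 404" error in the output above.
func downloadLicenses(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return fmt.Errorf("failed to download licenses: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("download request did not return a 200, received: %d", resp.StatusCode)
	}
	// A real implementation would stream resp.Body to disk here.
	return nil
}

func main() {
	// Placeholder URL for illustration only.
	if err := downloadLicenses("https://example.com/licenses.zip"); err != nil {
		fmt.Println("Failed to download licenses:", err)
	}
}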

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 node stop m02 -v=7 --alsologtostderr
E0906 18:55:30.161310   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:56:11.123225   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:56:44.179194   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.466992965s)

                                                
                                                
-- stdout --
	* Stopping node "ha-313128-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:55:18.432207   28683 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:55:18.432377   28683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:55:18.432387   28683 out.go:358] Setting ErrFile to fd 2...
	I0906 18:55:18.432391   28683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:55:18.432554   28683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:55:18.432774   28683 mustload.go:65] Loading cluster: ha-313128
	I0906 18:55:18.433168   28683 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:55:18.433184   28683 stop.go:39] StopHost: ha-313128-m02
	I0906 18:55:18.433556   28683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:55:18.433600   28683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:55:18.449086   28683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42701
	I0906 18:55:18.449580   28683 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:55:18.450130   28683 main.go:141] libmachine: Using API Version  1
	I0906 18:55:18.450157   28683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:55:18.450508   28683 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:55:18.452639   28683 out.go:177] * Stopping node "ha-313128-m02"  ...
	I0906 18:55:18.453874   28683 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 18:55:18.453920   28683 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:55:18.454160   28683 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 18:55:18.454182   28683 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:55:18.457552   28683 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:55:18.457966   28683 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:55:18.457992   28683 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:55:18.458129   28683 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:55:18.458300   28683 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:55:18.458471   28683 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:55:18.458625   28683 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 18:55:18.540252   28683 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0906 18:55:18.593504   28683 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0906 18:55:18.649237   28683 main.go:141] libmachine: Stopping "ha-313128-m02"...
	I0906 18:55:18.649273   28683 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:55:18.650691   28683 main.go:141] libmachine: (ha-313128-m02) Calling .Stop
	I0906 18:55:18.654305   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 0/120
	I0906 18:55:19.655637   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 1/120
	I0906 18:55:20.656889   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 2/120
	I0906 18:55:21.658319   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 3/120
	I0906 18:55:22.660393   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 4/120
	I0906 18:55:23.662314   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 5/120
	I0906 18:55:24.663691   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 6/120
	I0906 18:55:25.665189   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 7/120
	I0906 18:55:26.667211   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 8/120
	I0906 18:55:27.668713   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 9/120
	I0906 18:55:28.671063   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 10/120
	I0906 18:55:29.672578   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 11/120
	I0906 18:55:30.674320   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 12/120
	I0906 18:55:31.675808   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 13/120
	I0906 18:55:32.677416   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 14/120
	I0906 18:55:33.679920   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 15/120
	I0906 18:55:34.682461   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 16/120
	I0906 18:55:35.684380   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 17/120
	I0906 18:55:36.685974   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 18/120
	I0906 18:55:37.687524   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 19/120
	I0906 18:55:38.689552   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 20/120
	I0906 18:55:39.691396   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 21/120
	I0906 18:55:40.692932   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 22/120
	I0906 18:55:41.694393   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 23/120
	I0906 18:55:42.695962   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 24/120
	I0906 18:55:43.698093   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 25/120
	I0906 18:55:44.699791   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 26/120
	I0906 18:55:45.701180   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 27/120
	I0906 18:55:46.702718   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 28/120
	I0906 18:55:47.704296   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 29/120
	I0906 18:55:48.706204   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 30/120
	I0906 18:55:49.707708   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 31/120
	I0906 18:55:50.709307   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 32/120
	I0906 18:55:51.711444   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 33/120
	I0906 18:55:52.712960   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 34/120
	I0906 18:55:53.714434   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 35/120
	I0906 18:55:54.716087   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 36/120
	I0906 18:55:55.717590   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 37/120
	I0906 18:55:56.719396   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 38/120
	I0906 18:55:57.720681   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 39/120
	I0906 18:55:58.722641   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 40/120
	I0906 18:55:59.724046   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 41/120
	I0906 18:56:00.725276   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 42/120
	I0906 18:56:01.727251   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 43/120
	I0906 18:56:02.728502   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 44/120
	I0906 18:56:03.730289   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 45/120
	I0906 18:56:04.731653   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 46/120
	I0906 18:56:05.732908   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 47/120
	I0906 18:56:06.735229   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 48/120
	I0906 18:56:07.736708   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 49/120
	I0906 18:56:08.739030   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 50/120
	I0906 18:56:09.740241   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 51/120
	I0906 18:56:10.741536   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 52/120
	I0906 18:56:11.743221   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 53/120
	I0906 18:56:12.744612   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 54/120
	I0906 18:56:13.746599   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 55/120
	I0906 18:56:14.747915   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 56/120
	I0906 18:56:15.749627   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 57/120
	I0906 18:56:16.751275   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 58/120
	I0906 18:56:17.752849   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 59/120
	I0906 18:56:18.755377   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 60/120
	I0906 18:56:19.756647   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 61/120
	I0906 18:56:20.758121   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 62/120
	I0906 18:56:21.759696   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 63/120
	I0906 18:56:22.761029   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 64/120
	I0906 18:56:23.763102   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 65/120
	I0906 18:56:24.764648   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 66/120
	I0906 18:56:25.765989   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 67/120
	I0906 18:56:26.767220   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 68/120
	I0906 18:56:27.768827   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 69/120
	I0906 18:56:28.770827   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 70/120
	I0906 18:56:29.773157   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 71/120
	I0906 18:56:30.775613   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 72/120
	I0906 18:56:31.777213   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 73/120
	I0906 18:56:32.779658   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 74/120
	I0906 18:56:33.781354   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 75/120
	I0906 18:56:34.783314   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 76/120
	I0906 18:56:35.785433   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 77/120
	I0906 18:56:36.787332   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 78/120
	I0906 18:56:37.788616   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 79/120
	I0906 18:56:38.790497   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 80/120
	I0906 18:56:39.791863   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 81/120
	I0906 18:56:40.793256   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 82/120
	I0906 18:56:41.795313   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 83/120
	I0906 18:56:42.797348   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 84/120
	I0906 18:56:43.799420   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 85/120
	I0906 18:56:44.801062   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 86/120
	I0906 18:56:45.803355   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 87/120
	I0906 18:56:46.804776   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 88/120
	I0906 18:56:47.806310   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 89/120
	I0906 18:56:48.808532   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 90/120
	I0906 18:56:49.809849   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 91/120
	I0906 18:56:50.811158   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 92/120
	I0906 18:56:51.812414   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 93/120
	I0906 18:56:52.814615   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 94/120
	I0906 18:56:53.816424   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 95/120
	I0906 18:56:54.818247   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 96/120
	I0906 18:56:55.819761   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 97/120
	I0906 18:56:56.821165   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 98/120
	I0906 18:56:57.822426   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 99/120
	I0906 18:56:58.824515   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 100/120
	I0906 18:56:59.825857   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 101/120
	I0906 18:57:00.827281   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 102/120
	I0906 18:57:01.828560   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 103/120
	I0906 18:57:02.830067   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 104/120
	I0906 18:57:03.831906   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 105/120
	I0906 18:57:04.833393   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 106/120
	I0906 18:57:05.835448   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 107/120
	I0906 18:57:06.837493   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 108/120
	I0906 18:57:07.838756   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 109/120
	I0906 18:57:08.841027   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 110/120
	I0906 18:57:09.842277   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 111/120
	I0906 18:57:10.843697   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 112/120
	I0906 18:57:11.844921   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 113/120
	I0906 18:57:12.846285   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 114/120
	I0906 18:57:13.848520   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 115/120
	I0906 18:57:14.850836   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 116/120
	I0906 18:57:15.852017   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 117/120
	I0906 18:57:16.853923   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 118/120
	I0906 18:57:17.855254   28683 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 119/120
	I0906 18:57:18.855894   28683 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0906 18:57:18.856178   28683 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-313128 node stop m02 -v=7 --alsologtostderr": exit status 30
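The "Waiting for machine to stop N/120" lines above reflect a stop that is requested once and then polled roughly once per second for 120 attempts before giving up with the "unable to stop vm" error. Below is a minimal sketch of that poll-with-timeout pattern; the machine interface, the fake VM, and the state strings are simplified assumptions for illustration, not libmachine's actual API.

package main

import (
	"fmt"
	"time"
)

// machine is a simplified stand-in for the VM driver the log talks to.
type machine interface {
	Stop() error
	State() (string, error)
}

// stopWithTimeout mirrors the behaviour seen above: request a stop, poll the
// state up to maxAttempts times, and report the current state if it never
// reaches "Stopped".
func stopWithTimeout(m machine, maxAttempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		state, err := m.State()
		if err == nil && state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	state, _ := m.State()
	return fmt.Errorf("unable to stop vm, current state %q", state)
}

// fakeVM never leaves the Running state, so the loop exhausts its attempts,
// like the m02 node in the run above.
type fakeVM struct{}

func (fakeVM) Stop() error            { return nil }
func (fakeVM) State() (string, error) { return "Running", nil }

func main() {
	// The real run allows 120 attempts; 3 keeps this example short.
	if err := stopWithTimeout(fakeVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}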
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
E0906 18:57:33.047418   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 3 (19.194083657s)

                                                
                                                
-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-313128-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:57:18.899315   29138 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:57:18.899411   29138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:57:18.899419   29138 out.go:358] Setting ErrFile to fd 2...
	I0906 18:57:18.899424   29138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:57:18.899628   29138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:57:18.899798   29138 out.go:352] Setting JSON to false
	I0906 18:57:18.899824   29138 mustload.go:65] Loading cluster: ha-313128
	I0906 18:57:18.899923   29138 notify.go:220] Checking for updates...
	I0906 18:57:18.900161   29138 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:57:18.900174   29138 status.go:255] checking status of ha-313128 ...
	I0906 18:57:18.900526   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:18.900570   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:18.925170   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46353
	I0906 18:57:18.925676   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:18.926276   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:18.926302   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:18.926695   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:18.926905   29138 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:57:18.928528   29138 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 18:57:18.928546   29138 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:57:18.928882   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:18.928927   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:18.944397   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0906 18:57:18.944887   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:18.945500   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:18.945523   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:18.945887   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:18.946102   29138 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:57:18.949194   29138 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:18.949665   29138 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:57:18.949692   29138 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:18.949830   29138 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:57:18.950142   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:18.950182   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:18.965013   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46097
	I0906 18:57:18.965433   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:18.965925   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:18.965944   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:18.966244   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:18.966461   29138 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:57:18.966657   29138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:18.966691   29138 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:57:18.969664   29138 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:18.970184   29138 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:57:18.970221   29138 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:18.970355   29138 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:57:18.970525   29138 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:57:18.970676   29138 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:57:18.970831   29138 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:57:19.058980   29138 ssh_runner.go:195] Run: systemctl --version
	I0906 18:57:19.066189   29138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:19.085205   29138 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:57:19.085240   29138 api_server.go:166] Checking apiserver status ...
	I0906 18:57:19.085272   29138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:57:19.102454   29138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup
	W0906 18:57:19.114111   29138 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:57:19.114162   29138 ssh_runner.go:195] Run: ls
	I0906 18:57:19.120121   29138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:57:19.126190   29138 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:57:19.126216   29138 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 18:57:19.126234   29138 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:57:19.126275   29138 status.go:255] checking status of ha-313128-m02 ...
	I0906 18:57:19.126609   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:19.126647   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:19.141273   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I0906 18:57:19.141658   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:19.142141   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:19.142161   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:19.142453   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:19.142671   29138 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:57:19.144439   29138 status.go:330] ha-313128-m02 host status = "Running" (err=<nil>)
	I0906 18:57:19.144456   29138 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:57:19.144811   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:19.144869   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:19.159780   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I0906 18:57:19.160309   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:19.160876   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:19.160903   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:19.161211   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:19.161397   29138 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:57:19.163975   29138 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:19.164357   29138 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:57:19.164387   29138 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:19.164550   29138 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:57:19.164965   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:19.165006   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:19.180000   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38493
	I0906 18:57:19.180426   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:19.180936   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:19.180955   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:19.181253   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:19.181449   29138 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:57:19.181615   29138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:19.181636   29138 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:57:19.184583   29138 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:19.185109   29138 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:57:19.185147   29138 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:19.185382   29138 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:57:19.185542   29138 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:57:19.185679   29138 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:57:19.185824   29138 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	W0906 18:57:37.673141   29138 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:57:37.673226   29138 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0906 18:57:37.673244   29138 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:57:37.673253   29138 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 18:57:37.673288   29138 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:57:37.673296   29138 status.go:255] checking status of ha-313128-m03 ...
	I0906 18:57:37.673589   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:37.673626   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:37.689437   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0906 18:57:37.689836   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:37.690300   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:37.690318   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:37.690636   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:37.690841   29138 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:57:37.692436   29138 status.go:330] ha-313128-m03 host status = "Running" (err=<nil>)
	I0906 18:57:37.692458   29138 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:57:37.692742   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:37.692774   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:37.707161   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I0906 18:57:37.707616   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:37.708077   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:37.708101   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:37.708428   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:37.708588   29138 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:57:37.711518   29138 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:37.711926   29138 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:57:37.711955   29138 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:37.712088   29138 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:57:37.712524   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:37.712573   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:37.727377   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43449
	I0906 18:57:37.727848   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:37.728383   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:37.728403   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:37.728765   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:37.729008   29138 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:57:37.729223   29138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:37.729248   29138 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:57:37.732101   29138 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:37.732517   29138 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:57:37.732544   29138 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:37.732643   29138 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:57:37.732833   29138 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:57:37.733016   29138 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:57:37.733152   29138 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:57:37.818885   29138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:37.837930   29138 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:57:37.837962   29138 api_server.go:166] Checking apiserver status ...
	I0906 18:57:37.838003   29138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:57:37.855165   29138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup
	W0906 18:57:37.873252   29138 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:57:37.873308   29138 ssh_runner.go:195] Run: ls
	I0906 18:57:37.878254   29138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:57:37.884740   29138 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:57:37.884767   29138 status.go:422] ha-313128-m03 apiserver status = Running (err=<nil>)
	I0906 18:57:37.884776   29138 status.go:257] ha-313128-m03 status: &{Name:ha-313128-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:57:37.884793   29138 status.go:255] checking status of ha-313128-m04 ...
	I0906 18:57:37.885154   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:37.885195   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:37.901686   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45065
	I0906 18:57:37.902125   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:37.902621   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:37.902653   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:37.902950   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:37.903159   29138 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:57:37.904767   29138 status.go:330] ha-313128-m04 host status = "Running" (err=<nil>)
	I0906 18:57:37.904783   29138 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:57:37.905106   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:37.905166   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:37.920720   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43073
	I0906 18:57:37.921127   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:37.921668   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:37.921730   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:37.922093   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:37.922305   29138 main.go:141] libmachine: (ha-313128-m04) Calling .GetIP
	I0906 18:57:37.925631   29138 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:37.926055   29138 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:57:37.926084   29138 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:37.926250   29138 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:57:37.926571   29138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:37.926611   29138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:37.941641   29138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0906 18:57:37.942129   29138 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:37.942563   29138 main.go:141] libmachine: Using API Version  1
	I0906 18:57:37.942582   29138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:37.942922   29138 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:37.943087   29138 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 18:57:37.943248   29138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:37.943263   29138 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 18:57:37.946260   29138 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:37.946587   29138 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:57:37.946616   29138 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:37.946727   29138 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHPort
	I0906 18:57:37.946902   29138 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHKeyPath
	I0906 18:57:37.947059   29138 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHUsername
	I0906 18:57:37.947206   29138 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m04/id_rsa Username:docker}
	I0906 18:57:38.033561   29138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:38.049950   29138 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-313128 -n ha-313128
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-313128 logs -n 25: (1.421633202s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2237225197/001/cp-test_ha-313128-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128:/home/docker/cp-test_ha-313128-m03_ha-313128.txt                       |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128 sudo cat                                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128.txt                                 |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m02:/home/docker/cp-test_ha-313128-m03_ha-313128-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m02 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04:/home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m04 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp testdata/cp-test.txt                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2237225197/001/cp-test_ha-313128-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128:/home/docker/cp-test_ha-313128-m04_ha-313128.txt                       |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128 sudo cat                                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128.txt                                 |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m02:/home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m02 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03:/home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m03 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-313128 node stop m02 -v=7                                                     | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:50:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:50:42.241342   24633 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:50:42.241614   24633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:50:42.241623   24633 out.go:358] Setting ErrFile to fd 2...
	I0906 18:50:42.241627   24633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:50:42.241844   24633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:50:42.242402   24633 out.go:352] Setting JSON to false
	I0906 18:50:42.243240   24633 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1991,"bootTime":1725646651,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:50:42.243295   24633 start.go:139] virtualization: kvm guest
	I0906 18:50:42.245178   24633 out.go:177] * [ha-313128] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 18:50:42.246461   24633 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:50:42.246466   24633 notify.go:220] Checking for updates...
	I0906 18:50:42.247673   24633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:50:42.249313   24633 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:50:42.250474   24633 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:50:42.251672   24633 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:50:42.252739   24633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:50:42.253949   24633 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:50:42.288794   24633 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 18:50:42.289936   24633 start.go:297] selected driver: kvm2
	I0906 18:50:42.289949   24633 start.go:901] validating driver "kvm2" against <nil>
	I0906 18:50:42.289962   24633 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:50:42.290679   24633 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:50:42.290744   24633 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 18:50:42.305815   24633 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 18:50:42.305868   24633 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:50:42.306084   24633 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:50:42.306137   24633 cni.go:84] Creating CNI manager for ""
	I0906 18:50:42.306149   24633 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0906 18:50:42.306154   24633 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 18:50:42.306207   24633 start.go:340] cluster config:
	{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:50:42.306307   24633 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:50:42.307955   24633 out.go:177] * Starting "ha-313128" primary control-plane node in "ha-313128" cluster
	I0906 18:50:42.309081   24633 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:50:42.309113   24633 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 18:50:42.309125   24633 cache.go:56] Caching tarball of preloaded images
	I0906 18:50:42.309203   24633 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 18:50:42.309216   24633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 18:50:42.309557   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:50:42.309582   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json: {Name:mk2b5aaa86bcacd8dc1788c104cd70b3467204ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:50:42.309744   24633 start.go:360] acquireMachinesLock for ha-313128: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:50:42.309777   24633 start.go:364] duration metric: took 18.419µs to acquireMachinesLock for "ha-313128"
	I0906 18:50:42.309804   24633 start.go:93] Provisioning new machine with config: &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:50:42.309860   24633 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 18:50:42.311483   24633 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 18:50:42.311612   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:50:42.311656   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:50:42.325721   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I0906 18:50:42.326175   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:50:42.326691   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:50:42.326710   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:50:42.327026   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:50:42.327156   24633 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 18:50:42.327294   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:50:42.327414   24633 start.go:159] libmachine.API.Create for "ha-313128" (driver="kvm2")
	I0906 18:50:42.327441   24633 client.go:168] LocalClient.Create starting
	I0906 18:50:42.327469   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 18:50:42.327502   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:50:42.327523   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:50:42.327577   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 18:50:42.327595   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:50:42.327608   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:50:42.327627   24633 main.go:141] libmachine: Running pre-create checks...
	I0906 18:50:42.327635   24633 main.go:141] libmachine: (ha-313128) Calling .PreCreateCheck
	I0906 18:50:42.327947   24633 main.go:141] libmachine: (ha-313128) Calling .GetConfigRaw
	I0906 18:50:42.328330   24633 main.go:141] libmachine: Creating machine...
	I0906 18:50:42.328348   24633 main.go:141] libmachine: (ha-313128) Calling .Create
	I0906 18:50:42.328448   24633 main.go:141] libmachine: (ha-313128) Creating KVM machine...
	I0906 18:50:42.329656   24633 main.go:141] libmachine: (ha-313128) DBG | found existing default KVM network
	I0906 18:50:42.330288   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:42.330161   24656 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0906 18:50:42.330322   24633 main.go:141] libmachine: (ha-313128) DBG | created network xml: 
	I0906 18:50:42.330340   24633 main.go:141] libmachine: (ha-313128) DBG | <network>
	I0906 18:50:42.330350   24633 main.go:141] libmachine: (ha-313128) DBG |   <name>mk-ha-313128</name>
	I0906 18:50:42.330355   24633 main.go:141] libmachine: (ha-313128) DBG |   <dns enable='no'/>
	I0906 18:50:42.330360   24633 main.go:141] libmachine: (ha-313128) DBG |   
	I0906 18:50:42.330367   24633 main.go:141] libmachine: (ha-313128) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0906 18:50:42.330373   24633 main.go:141] libmachine: (ha-313128) DBG |     <dhcp>
	I0906 18:50:42.330381   24633 main.go:141] libmachine: (ha-313128) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0906 18:50:42.330387   24633 main.go:141] libmachine: (ha-313128) DBG |     </dhcp>
	I0906 18:50:42.330394   24633 main.go:141] libmachine: (ha-313128) DBG |   </ip>
	I0906 18:50:42.330399   24633 main.go:141] libmachine: (ha-313128) DBG |   
	I0906 18:50:42.330406   24633 main.go:141] libmachine: (ha-313128) DBG | </network>
	I0906 18:50:42.330412   24633 main.go:141] libmachine: (ha-313128) DBG | 
	I0906 18:50:42.335419   24633 main.go:141] libmachine: (ha-313128) DBG | trying to create private KVM network mk-ha-313128 192.168.39.0/24...
	I0906 18:50:42.399184   24633 main.go:141] libmachine: (ha-313128) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128 ...
	I0906 18:50:42.399215   24633 main.go:141] libmachine: (ha-313128) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:50:42.399226   24633 main.go:141] libmachine: (ha-313128) DBG | private KVM network mk-ha-313128 192.168.39.0/24 created
	I0906 18:50:42.399261   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:42.399132   24656 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:50:42.399285   24633 main.go:141] libmachine: (ha-313128) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 18:50:42.637821   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:42.637701   24656 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa...
	I0906 18:50:42.786449   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:42.786308   24656 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/ha-313128.rawdisk...
	I0906 18:50:42.786491   24633 main.go:141] libmachine: (ha-313128) DBG | Writing magic tar header
	I0906 18:50:42.786508   24633 main.go:141] libmachine: (ha-313128) DBG | Writing SSH key tar header
	I0906 18:50:42.786520   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:42.786456   24656 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128 ...
	I0906 18:50:42.786635   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128
	I0906 18:50:42.786668   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 18:50:42.786681   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128 (perms=drwx------)
	I0906 18:50:42.786712   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 18:50:42.786722   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 18:50:42.786736   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 18:50:42.786751   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 18:50:42.786761   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:50:42.786775   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 18:50:42.786787   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 18:50:42.786799   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins
	I0906 18:50:42.786808   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home
	I0906 18:50:42.786816   24633 main.go:141] libmachine: (ha-313128) DBG | Skipping /home - not owner
	I0906 18:50:42.786825   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 18:50:42.786831   24633 main.go:141] libmachine: (ha-313128) Creating domain...
	I0906 18:50:42.787754   24633 main.go:141] libmachine: (ha-313128) define libvirt domain using xml: 
	I0906 18:50:42.787766   24633 main.go:141] libmachine: (ha-313128) <domain type='kvm'>
	I0906 18:50:42.787772   24633 main.go:141] libmachine: (ha-313128)   <name>ha-313128</name>
	I0906 18:50:42.787782   24633 main.go:141] libmachine: (ha-313128)   <memory unit='MiB'>2200</memory>
	I0906 18:50:42.787791   24633 main.go:141] libmachine: (ha-313128)   <vcpu>2</vcpu>
	I0906 18:50:42.787814   24633 main.go:141] libmachine: (ha-313128)   <features>
	I0906 18:50:42.787827   24633 main.go:141] libmachine: (ha-313128)     <acpi/>
	I0906 18:50:42.787831   24633 main.go:141] libmachine: (ha-313128)     <apic/>
	I0906 18:50:42.787836   24633 main.go:141] libmachine: (ha-313128)     <pae/>
	I0906 18:50:42.787844   24633 main.go:141] libmachine: (ha-313128)     
	I0906 18:50:42.787850   24633 main.go:141] libmachine: (ha-313128)   </features>
	I0906 18:50:42.787857   24633 main.go:141] libmachine: (ha-313128)   <cpu mode='host-passthrough'>
	I0906 18:50:42.787865   24633 main.go:141] libmachine: (ha-313128)   
	I0906 18:50:42.787874   24633 main.go:141] libmachine: (ha-313128)   </cpu>
	I0906 18:50:42.787888   24633 main.go:141] libmachine: (ha-313128)   <os>
	I0906 18:50:42.787898   24633 main.go:141] libmachine: (ha-313128)     <type>hvm</type>
	I0906 18:50:42.787909   24633 main.go:141] libmachine: (ha-313128)     <boot dev='cdrom'/>
	I0906 18:50:42.787921   24633 main.go:141] libmachine: (ha-313128)     <boot dev='hd'/>
	I0906 18:50:42.787929   24633 main.go:141] libmachine: (ha-313128)     <bootmenu enable='no'/>
	I0906 18:50:42.787935   24633 main.go:141] libmachine: (ha-313128)   </os>
	I0906 18:50:42.787940   24633 main.go:141] libmachine: (ha-313128)   <devices>
	I0906 18:50:42.787947   24633 main.go:141] libmachine: (ha-313128)     <disk type='file' device='cdrom'>
	I0906 18:50:42.787955   24633 main.go:141] libmachine: (ha-313128)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/boot2docker.iso'/>
	I0906 18:50:42.787962   24633 main.go:141] libmachine: (ha-313128)       <target dev='hdc' bus='scsi'/>
	I0906 18:50:42.787967   24633 main.go:141] libmachine: (ha-313128)       <readonly/>
	I0906 18:50:42.787977   24633 main.go:141] libmachine: (ha-313128)     </disk>
	I0906 18:50:42.788009   24633 main.go:141] libmachine: (ha-313128)     <disk type='file' device='disk'>
	I0906 18:50:42.788034   24633 main.go:141] libmachine: (ha-313128)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 18:50:42.788050   24633 main.go:141] libmachine: (ha-313128)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/ha-313128.rawdisk'/>
	I0906 18:50:42.788060   24633 main.go:141] libmachine: (ha-313128)       <target dev='hda' bus='virtio'/>
	I0906 18:50:42.788073   24633 main.go:141] libmachine: (ha-313128)     </disk>
	I0906 18:50:42.788084   24633 main.go:141] libmachine: (ha-313128)     <interface type='network'>
	I0906 18:50:42.788096   24633 main.go:141] libmachine: (ha-313128)       <source network='mk-ha-313128'/>
	I0906 18:50:42.788111   24633 main.go:141] libmachine: (ha-313128)       <model type='virtio'/>
	I0906 18:50:42.788122   24633 main.go:141] libmachine: (ha-313128)     </interface>
	I0906 18:50:42.788132   24633 main.go:141] libmachine: (ha-313128)     <interface type='network'>
	I0906 18:50:42.788142   24633 main.go:141] libmachine: (ha-313128)       <source network='default'/>
	I0906 18:50:42.788151   24633 main.go:141] libmachine: (ha-313128)       <model type='virtio'/>
	I0906 18:50:42.788163   24633 main.go:141] libmachine: (ha-313128)     </interface>
	I0906 18:50:42.788186   24633 main.go:141] libmachine: (ha-313128)     <serial type='pty'>
	I0906 18:50:42.788201   24633 main.go:141] libmachine: (ha-313128)       <target port='0'/>
	I0906 18:50:42.788212   24633 main.go:141] libmachine: (ha-313128)     </serial>
	I0906 18:50:42.788219   24633 main.go:141] libmachine: (ha-313128)     <console type='pty'>
	I0906 18:50:42.788230   24633 main.go:141] libmachine: (ha-313128)       <target type='serial' port='0'/>
	I0906 18:50:42.788242   24633 main.go:141] libmachine: (ha-313128)     </console>
	I0906 18:50:42.788255   24633 main.go:141] libmachine: (ha-313128)     <rng model='virtio'>
	I0906 18:50:42.788267   24633 main.go:141] libmachine: (ha-313128)       <backend model='random'>/dev/random</backend>
	I0906 18:50:42.788281   24633 main.go:141] libmachine: (ha-313128)     </rng>
	I0906 18:50:42.788294   24633 main.go:141] libmachine: (ha-313128)     
	I0906 18:50:42.788304   24633 main.go:141] libmachine: (ha-313128)     
	I0906 18:50:42.788315   24633 main.go:141] libmachine: (ha-313128)   </devices>
	I0906 18:50:42.788322   24633 main.go:141] libmachine: (ha-313128) </domain>
	I0906 18:50:42.788340   24633 main.go:141] libmachine: (ha-313128) 
	I0906 18:50:42.792640   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:1a:9d:87 in network default
	I0906 18:50:42.793247   24633 main.go:141] libmachine: (ha-313128) Ensuring networks are active...
	I0906 18:50:42.793269   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:42.793922   24633 main.go:141] libmachine: (ha-313128) Ensuring network default is active
	I0906 18:50:42.794264   24633 main.go:141] libmachine: (ha-313128) Ensuring network mk-ha-313128 is active
	I0906 18:50:42.794846   24633 main.go:141] libmachine: (ha-313128) Getting domain xml...
	I0906 18:50:42.795607   24633 main.go:141] libmachine: (ha-313128) Creating domain...
	I0906 18:50:43.986213   24633 main.go:141] libmachine: (ha-313128) Waiting to get IP...
	I0906 18:50:43.986898   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:43.987226   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:43.987269   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:43.987214   24656 retry.go:31] will retry after 219.310914ms: waiting for machine to come up
	I0906 18:50:44.208650   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:44.209073   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:44.209112   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:44.209040   24656 retry.go:31] will retry after 263.652423ms: waiting for machine to come up
	I0906 18:50:44.474435   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:44.474934   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:44.474956   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:44.474885   24656 retry.go:31] will retry after 370.076871ms: waiting for machine to come up
	I0906 18:50:44.846380   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:44.846744   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:44.846768   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:44.846717   24656 retry.go:31] will retry after 435.12925ms: waiting for machine to come up
	I0906 18:50:45.283287   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:45.283672   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:45.283696   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:45.283635   24656 retry.go:31] will retry after 719.1692ms: waiting for machine to come up
	I0906 18:50:46.003981   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:46.004393   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:46.004421   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:46.004344   24656 retry.go:31] will retry after 582.927494ms: waiting for machine to come up
	I0906 18:50:46.589175   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:46.589589   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:46.589617   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:46.589541   24656 retry.go:31] will retry after 1.047400336s: waiting for machine to come up
	I0906 18:50:47.638869   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:47.639295   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:47.639322   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:47.639244   24656 retry.go:31] will retry after 959.975477ms: waiting for machine to come up
	I0906 18:50:48.600448   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:48.600911   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:48.600933   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:48.600845   24656 retry.go:31] will retry after 1.819892733s: waiting for machine to come up
	I0906 18:50:50.422074   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:50.422512   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:50.422535   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:50.422470   24656 retry.go:31] will retry after 2.317608626s: waiting for machine to come up
	I0906 18:50:52.741860   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:52.742278   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:52.742300   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:52.742246   24656 retry.go:31] will retry after 1.884163944s: waiting for machine to come up
	I0906 18:50:54.629204   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:54.629610   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:54.629631   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:54.629577   24656 retry.go:31] will retry after 3.296166546s: waiting for machine to come up
	I0906 18:50:57.927315   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:57.927722   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:57.927749   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:57.927670   24656 retry.go:31] will retry after 3.645758109s: waiting for machine to come up
	I0906 18:51:01.577712   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:01.578200   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:51:01.578229   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:51:01.578140   24656 retry.go:31] will retry after 4.942659137s: waiting for machine to come up
	I0906 18:51:06.525967   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.526312   24633 main.go:141] libmachine: (ha-313128) Found IP for machine: 192.168.39.70
	I0906 18:51:06.526338   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has current primary IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.526348   24633 main.go:141] libmachine: (ha-313128) Reserving static IP address...
	I0906 18:51:06.526675   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find host DHCP lease matching {name: "ha-313128", mac: "52:54:00:e1:5d:d2", ip: "192.168.39.70"} in network mk-ha-313128
	I0906 18:51:06.597574   24633 main.go:141] libmachine: (ha-313128) DBG | Getting to WaitForSSH function...
	I0906 18:51:06.597619   24633 main.go:141] libmachine: (ha-313128) Reserved static IP address: 192.168.39.70
	I0906 18:51:06.597635   24633 main.go:141] libmachine: (ha-313128) Waiting for SSH to be available...
	I0906 18:51:06.600248   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.600651   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:06.600679   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.600936   24633 main.go:141] libmachine: (ha-313128) DBG | Using SSH client type: external
	I0906 18:51:06.600961   24633 main.go:141] libmachine: (ha-313128) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa (-rw-------)
	I0906 18:51:06.600988   24633 main.go:141] libmachine: (ha-313128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:51:06.601002   24633 main.go:141] libmachine: (ha-313128) DBG | About to run SSH command:
	I0906 18:51:06.601015   24633 main.go:141] libmachine: (ha-313128) DBG | exit 0
	I0906 18:51:06.725154   24633 main.go:141] libmachine: (ha-313128) DBG | SSH cmd err, output: <nil>: 
	I0906 18:51:06.725459   24633 main.go:141] libmachine: (ha-313128) KVM machine creation complete!
	I0906 18:51:06.725772   24633 main.go:141] libmachine: (ha-313128) Calling .GetConfigRaw
	I0906 18:51:06.726286   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:06.726476   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:06.726637   24633 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 18:51:06.726652   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:51:06.727819   24633 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 18:51:06.727834   24633 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 18:51:06.727842   24633 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 18:51:06.727848   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:06.730591   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.730983   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:06.731016   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.731117   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:06.731299   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.731441   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.731585   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:06.731762   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:06.731973   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:06.731985   24633 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 18:51:06.836292   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:51:06.836313   24633 main.go:141] libmachine: Detecting the provisioner...
	I0906 18:51:06.836320   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:06.838996   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.839387   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:06.839420   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.839518   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:06.839727   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.839896   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.840053   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:06.840220   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:06.840381   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:06.840393   24633 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 18:51:06.949833   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 18:51:06.949949   24633 main.go:141] libmachine: found compatible host: buildroot
	I0906 18:51:06.949961   24633 main.go:141] libmachine: Provisioning with buildroot...
	I0906 18:51:06.949969   24633 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 18:51:06.950241   24633 buildroot.go:166] provisioning hostname "ha-313128"
	I0906 18:51:06.950264   24633 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 18:51:06.950485   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:06.952910   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.953295   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:06.953317   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.953488   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:06.953693   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.953840   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.954001   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:06.954152   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:06.954332   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:06.954344   24633 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-313128 && echo "ha-313128" | sudo tee /etc/hostname
	I0906 18:51:07.075964   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128
	
	I0906 18:51:07.076000   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.078750   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.079086   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.079113   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.079316   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:07.079484   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.079673   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.079798   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:07.079962   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:07.080126   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:07.080141   24633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-313128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-313128/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-313128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:51:07.193921   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:51:07.193958   24633 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 18:51:07.193989   24633 buildroot.go:174] setting up certificates
	I0906 18:51:07.193999   24633 provision.go:84] configureAuth start
	I0906 18:51:07.194011   24633 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 18:51:07.194348   24633 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:51:07.196926   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.197260   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.197286   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.197450   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.199422   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.199698   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.199717   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.199838   24633 provision.go:143] copyHostCerts
	I0906 18:51:07.199869   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:51:07.199919   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 18:51:07.199937   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:51:07.200019   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 18:51:07.200174   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:51:07.200203   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 18:51:07.200213   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:51:07.200255   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 18:51:07.200340   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:51:07.200363   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 18:51:07.200372   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:51:07.200407   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 18:51:07.200497   24633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.ha-313128 san=[127.0.0.1 192.168.39.70 ha-313128 localhost minikube]
	I0906 18:51:07.392285   24633 provision.go:177] copyRemoteCerts
	I0906 18:51:07.392342   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:51:07.392362   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.394986   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.395297   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.395325   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.395525   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:07.395685   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.395819   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:07.395921   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:07.479623   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 18:51:07.479691   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 18:51:07.505265   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 18:51:07.505334   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:51:07.529872   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 18:51:07.529933   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0906 18:51:07.553374   24633 provision.go:87] duration metric: took 359.361307ms to configureAuth
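	The scp steps above place server.pem, server-key.pem and ca.pem under /etc/docker on the guest, with the SANs [127.0.0.1 192.168.39.70 ha-313128 localhost minikube] baked into the server certificate. A minimal sketch of double-checking the provisioned material over the same SSH session, assuming the paths shown in the log:
	# confirm the server certificate carries the expected SANs
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# confirm certificate and private key match by comparing public-key digests
	sudo openssl x509 -in /etc/docker/server.pem -noout -pubkey | sha256sum
	sudo openssl pkey -in /etc/docker/server-key.pem -pubout | sha256sum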
	I0906 18:51:07.553397   24633 buildroot.go:189] setting minikube options for container-runtime
	I0906 18:51:07.553562   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:51:07.553623   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.556156   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.556501   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.556527   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.556676   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:07.556912   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.557048   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.557155   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:07.557294   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:07.557492   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:07.557512   24633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 18:51:07.787198   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 18:51:07.787231   24633 main.go:141] libmachine: Checking connection to Docker...
	I0906 18:51:07.787242   24633 main.go:141] libmachine: (ha-313128) Calling .GetURL
	I0906 18:51:07.788669   24633 main.go:141] libmachine: (ha-313128) DBG | Using libvirt version 6000000
	I0906 18:51:07.790719   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.791027   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.791057   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.791182   24633 main.go:141] libmachine: Docker is up and running!
	I0906 18:51:07.791202   24633 main.go:141] libmachine: Reticulating splines...
	I0906 18:51:07.791210   24633 client.go:171] duration metric: took 25.463760113s to LocalClient.Create
	I0906 18:51:07.791234   24633 start.go:167] duration metric: took 25.463820367s to libmachine.API.Create "ha-313128"
	I0906 18:51:07.791246   24633 start.go:293] postStartSetup for "ha-313128" (driver="kvm2")
	I0906 18:51:07.791261   24633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:51:07.791279   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:07.791515   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:51:07.791537   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.793579   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.793894   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.793923   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.794060   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:07.794226   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.794368   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:07.794495   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:07.880189   24633 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:51:07.885048   24633 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 18:51:07.885072   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 18:51:07.885149   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 18:51:07.885250   24633 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 18:51:07.885262   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 18:51:07.885376   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 18:51:07.895441   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:51:07.923397   24633 start.go:296] duration metric: took 132.136955ms for postStartSetup
	I0906 18:51:07.923473   24633 main.go:141] libmachine: (ha-313128) Calling .GetConfigRaw
	I0906 18:51:07.924092   24633 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:51:07.926375   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.926621   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.926640   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.926875   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:51:07.927092   24633 start.go:128] duration metric: took 25.617222048s to createHost
	I0906 18:51:07.927113   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.929244   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.929555   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.929570   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.929747   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:07.929945   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.930104   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.930251   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:07.930418   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:07.930613   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:07.930632   24633 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 18:51:08.038105   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725648668.016751149
	
	I0906 18:51:08.038127   24633 fix.go:216] guest clock: 1725648668.016751149
	I0906 18:51:08.038134   24633 fix.go:229] Guest: 2024-09-06 18:51:08.016751149 +0000 UTC Remote: 2024-09-06 18:51:07.927102611 +0000 UTC m=+25.719332215 (delta=89.648538ms)
	I0906 18:51:08.038163   24633 fix.go:200] guest clock delta is within tolerance: 89.648538ms
	I0906 18:51:08.038171   24633 start.go:83] releasing machines lock for "ha-313128", held for 25.728376749s
	I0906 18:51:08.038193   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:08.038444   24633 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:51:08.041444   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.041798   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:08.041826   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.042067   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:08.042545   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:08.042725   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:08.042811   24633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:51:08.042861   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:08.042969   24633 ssh_runner.go:195] Run: cat /version.json
	I0906 18:51:08.043011   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:08.045414   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.045687   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.045772   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:08.045801   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.045943   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:08.046117   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:08.046150   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:08.046174   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.046255   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:08.046331   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:08.046388   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:08.046449   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:08.046575   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:08.046710   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:08.130810   24633 ssh_runner.go:195] Run: systemctl --version
	I0906 18:51:08.154651   24633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 18:51:08.313672   24633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 18:51:08.319900   24633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:51:08.320001   24633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:51:08.337715   24633 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 18:51:08.337741   24633 start.go:495] detecting cgroup driver to use...
	I0906 18:51:08.337820   24633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 18:51:08.356242   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:51:08.371685   24633 docker.go:217] disabling cri-docker service (if available) ...
	I0906 18:51:08.371740   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 18:51:08.387728   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 18:51:08.402690   24633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 18:51:08.531270   24633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 18:51:08.703601   24633 docker.go:233] disabling docker service ...
	I0906 18:51:08.703668   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 18:51:08.718740   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 18:51:08.731543   24633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 18:51:08.865160   24633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 18:51:08.995934   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 18:51:09.010476   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:51:09.030226   24633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 18:51:09.030288   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.040653   24633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 18:51:09.040759   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.051481   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.061652   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.072907   24633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:51:09.083460   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.093354   24633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.110243   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
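	The sed commands above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to cgroupfs with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A minimal sketch of inspecting the result on the guest, assuming the same file path:
	# show the keys minikube just rewrote in the CRI-O drop-in
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected values, per the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",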
	I0906 18:51:09.120642   24633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:51:09.129843   24633 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 18:51:09.129895   24633 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 18:51:09.142908   24633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
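	The sysctl probe above fails with status 255 because the br_netfilter module is not loaded yet, so minikube loads it and re-enables IPv4 forwarding before restarting CRI-O. A minimal sketch of the same check-then-load pattern, assuming a root-capable shell on the guest:
	# verify the bridge netfilter sysctl is visible; load br_netfilter if it is not
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter
	fi
	# kube-proxy and the CNI plugin expect IPv4 forwarding to be on
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo sysctl net.bridge.bridge-nf-call-iptables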
	I0906 18:51:09.152738   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:51:09.277726   24633 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 18:51:09.381806   24633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 18:51:09.381889   24633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 18:51:09.387328   24633 start.go:563] Will wait 60s for crictl version
	I0906 18:51:09.387386   24633 ssh_runner.go:195] Run: which crictl
	I0906 18:51:09.391304   24633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:51:09.431494   24633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 18:51:09.431568   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:51:09.459195   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:51:09.490550   24633 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 18:51:09.491778   24633 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:51:09.494246   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:09.494523   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:09.494552   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:09.494788   24633 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 18:51:09.498999   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:51:09.512390   24633 kubeadm.go:883] updating cluster {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 18:51:09.512493   24633 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:51:09.512534   24633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:51:09.544646   24633 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 18:51:09.544722   24633 ssh_runner.go:195] Run: which lz4
	I0906 18:51:09.548564   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0906 18:51:09.548652   24633 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 18:51:09.552604   24633 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 18:51:09.552630   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 18:51:10.933093   24633 crio.go:462] duration metric: took 1.384461239s to copy over tarball
	I0906 18:51:10.933167   24633 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 18:51:12.961238   24633 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.028040223s)
	I0906 18:51:12.961266   24633 crio.go:469] duration metric: took 2.028146469s to extract the tarball
	I0906 18:51:12.961275   24633 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 18:51:12.998311   24633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:51:13.045521   24633 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 18:51:13.045548   24633 cache_images.go:84] Images are preloaded, skipping loading
	I0906 18:51:13.045558   24633 kubeadm.go:934] updating node { 192.168.39.70 8443 v1.31.0 crio true true} ...
	I0906 18:51:13.045681   24633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-313128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 18:51:13.045804   24633 ssh_runner.go:195] Run: crio config
	I0906 18:51:13.094877   24633 cni.go:84] Creating CNI manager for ""
	I0906 18:51:13.094895   24633 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0906 18:51:13.094910   24633 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 18:51:13.094932   24633 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-313128 NodeName:ha-313128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 18:51:13.095060   24633 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-313128"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 18:51:13.095095   24633 kube-vip.go:115] generating kube-vip config ...
	I0906 18:51:13.095137   24633 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 18:51:13.117215   24633 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 18:51:13.117347   24633 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
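	The static pod manifest above runs kube-vip with ARP mode and control-plane load balancing enabled, advertising the VIP 192.168.39.254 on eth0 and coordinating leadership through the plndr-cp-lock lease. A minimal sketch of confirming the VIP once the cluster is up, assuming SSH access to a control-plane node and a working kubeconfig:
	# the current leader should have the VIP bound to eth0
	ip addr show eth0 | grep 192.168.39.254
	# the leader election lease lives in kube-system
	kubectl -n kube-system get lease plndr-cp-lock -o yaml | grep holderIdentity
	# the API server should answer on the VIP
	curl -k https://192.168.39.254:8443/healthz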
	I0906 18:51:13.117417   24633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:51:13.133450   24633 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 18:51:13.133529   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 18:51:13.143093   24633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 18:51:13.159866   24633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:51:13.175754   24633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
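	The 2150-byte file copied here is the kubeadm configuration printed above, staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init. A minimal sketch of sanity-checking such a file before running kubeadm init, assuming the kubeadm binary path used elsewhere in the log and that the config validate subcommand is available in this release:
	# validate the staged config against the kubeadm v1beta3 API types
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
	# print the built-in defaults for comparison
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config print init-defaults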
	I0906 18:51:13.192134   24633 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0906 18:51:13.208621   24633 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0906 18:51:13.212459   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:51:13.224981   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:51:13.349241   24633 ssh_runner.go:195] Run: sudo systemctl start kubelet
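	At this point the kubelet unit (352 bytes), its 10-kubeadm.conf drop-in (308 bytes, carrying the ExecStart shown earlier) and the kube-vip manifest are in place, systemd has been reloaded and kubelet started. A minimal sketch of inspecting what systemd ended up with, assuming the standard systemd tooling on the guest:
	# show the merged unit, including the 10-kubeadm.conf drop-in
	systemctl cat kubelet
	# kubelet restarts until kubeadm init writes /etc/kubernetes/kubelet.conf; that is expected here
	systemctl status kubelet --no-pager | head -n 10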
	I0906 18:51:13.367120   24633 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128 for IP: 192.168.39.70
	I0906 18:51:13.367144   24633 certs.go:194] generating shared ca certs ...
	I0906 18:51:13.367163   24633 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:13.367343   24633 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 18:51:13.367415   24633 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 18:51:13.367435   24633 certs.go:256] generating profile certs ...
	I0906 18:51:13.367515   24633 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key
	I0906 18:51:13.367534   24633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt with IP's: []
	I0906 18:51:13.666007   24633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt ...
	I0906 18:51:13.666050   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt: {Name:mkae10c4a64978657f91d36b765edf2f72d6b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:13.666247   24633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key ...
	I0906 18:51:13.666263   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key: {Name:mk49f39f518303d15b2fb4f8a39da575a917b087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:13.666354   24633 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.deddd12e
	I0906 18:51:13.666371   24633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.deddd12e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.70 192.168.39.254]
	I0906 18:51:13.920406   24633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.deddd12e ...
	I0906 18:51:13.920433   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.deddd12e: {Name:mk1fa2ba1c8b6fdd0c2c1b723647f82406e8dba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:13.920583   24633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.deddd12e ...
	I0906 18:51:13.920595   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.deddd12e: {Name:mk52bb4d4b7d02fab0ab5d4beac0a76ea18ed743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:13.920661   24633 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.deddd12e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt
	I0906 18:51:13.920756   24633 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.deddd12e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key
	I0906 18:51:13.920815   24633 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key
	I0906 18:51:13.920830   24633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt with IP's: []
	I0906 18:51:14.002856   24633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt ...
	I0906 18:51:14.002883   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt: {Name:mk2700e95bb8cfbf5bacfb518b6bf12523e49fbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:14.003026   24633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key ...
	I0906 18:51:14.003037   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key: {Name:mk668e5ba0da1ad43715dba8fcdf30dc055390cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
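	The profile certificates generated above include an apiserver cert signed by minikubeCA with SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.70 and the HA VIP 192.168.39.254. A minimal sketch of verifying such a cert on the build host, assuming the profile paths shown in the log:
	PROFILE=/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128
	CA=/home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt
	# the apiserver cert must chain to minikubeCA and carry the VIP in its SANs
	openssl verify -CAfile "$CA" "$PROFILE/apiserver.crt"
	openssl x509 -in "$PROFILE/apiserver.crt" -noout -text | grep -A1 'Subject Alternative Name'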
	I0906 18:51:14.003116   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 18:51:14.003132   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 18:51:14.003143   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 18:51:14.003156   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 18:51:14.003168   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 18:51:14.003180   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 18:51:14.003192   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 18:51:14.003203   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 18:51:14.003269   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 18:51:14.003305   24633 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 18:51:14.003314   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 18:51:14.003345   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:51:14.003371   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:51:14.003392   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 18:51:14.003429   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:51:14.003454   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:14.003472   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 18:51:14.003485   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 18:51:14.004036   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:51:14.029711   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 18:51:14.052823   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:51:14.075993   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:51:14.099286   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 18:51:14.125504   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 18:51:14.150019   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:51:14.175178   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 18:51:14.208805   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:51:14.232695   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 18:51:14.257263   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 18:51:14.281295   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 18:51:14.298392   24633 ssh_runner.go:195] Run: openssl version
	I0906 18:51:14.304447   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:51:14.317138   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:14.322188   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:14.322250   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:14.328420   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 18:51:14.340736   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 18:51:14.352636   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 18:51:14.357230   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 18:51:14.357297   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 18:51:14.363056   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 18:51:14.375559   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 18:51:14.387857   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 18:51:14.392947   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 18:51:14.393003   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 18:51:14.398952   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
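	Each CA bundle copied above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0) so that standard TLS clients on the guest can find it. A minimal sketch of the same pattern for an arbitrary certificate, with example-ca.pem as a hypothetical file name:
	# compute the subject hash OpenSSL uses for trust-store lookups
	HASH=$(openssl x509 -hash -noout -in example-ca.pem)
	# stage the certificate and link it under <hash>.0 in the trust directory
	sudo cp example-ca.pem /usr/share/ca-certificates/example-ca.pem
	sudo ln -fs /usr/share/ca-certificates/example-ca.pem "/etc/ssl/certs/${HASH}.0"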
	I0906 18:51:14.412232   24633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:51:14.416575   24633 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:51:14.416647   24633 kubeadm.go:392] StartCluster: {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:51:14.416759   24633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 18:51:14.416851   24633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 18:51:14.469476   24633 cri.go:89] found id: ""
	I0906 18:51:14.469549   24633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 18:51:14.482583   24633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 18:51:14.493642   24633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 18:51:14.505454   24633 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 18:51:14.505475   24633 kubeadm.go:157] found existing configuration files:
	
	I0906 18:51:14.505526   24633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 18:51:14.515659   24633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 18:51:14.515720   24633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 18:51:14.524992   24633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 18:51:14.534185   24633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 18:51:14.534243   24633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 18:51:14.544106   24633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 18:51:14.553426   24633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 18:51:14.553490   24633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 18:51:14.563381   24633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 18:51:14.573166   24633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 18:51:14.573231   24633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
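	For reference, the grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not mention it, so that the following kubeadm init can regenerate it. A minimal Go sketch of that loop, assuming a hypothetical runCmd helper that executes a shell command on the node over SSH and returns a non-nil error on non-zero exit:

package main

import "fmt"

// cleanupStaleKubeconfigs mirrors the grep/rm sequence in the log: any kubeconfig
// that does not reference the control-plane endpoint is treated as stale and removed.
func cleanupStaleKubeconfigs(runCmd func(string) error, endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 1 when the pattern is absent and 2 when the file is missing;
		// either way the existing config cannot be reused.
		if err := runCmd(fmt.Sprintf("sudo grep %q %s", endpoint, f)); err != nil {
			_ = runCmd("sudo rm -f " + f)
		}
	}
}

func main() {
	// Stand-in runner that only prints commands; a real caller would run them over SSH.
	run := func(cmd string) error { fmt.Println(cmd); return fmt.Errorf("no match") }
	cleanupStaleKubeconfigs(run, "https://control-plane.minikube.internal:8443")
}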
	I0906 18:51:14.582897   24633 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 18:51:14.689370   24633 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 18:51:14.689449   24633 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 18:51:14.797473   24633 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 18:51:14.797608   24633 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 18:51:14.797720   24633 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 18:51:14.807533   24633 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 18:51:14.844917   24633 out.go:235]   - Generating certificates and keys ...
	I0906 18:51:14.845072   24633 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 18:51:14.845162   24633 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 18:51:15.027267   24633 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 18:51:15.311688   24633 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 18:51:15.533807   24633 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 18:51:15.655687   24633 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 18:51:15.914716   24633 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 18:51:15.914964   24633 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-313128 localhost] and IPs [192.168.39.70 127.0.0.1 ::1]
	I0906 18:51:16.269557   24633 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 18:51:16.269748   24633 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-313128 localhost] and IPs [192.168.39.70 127.0.0.1 ::1]
	I0906 18:51:16.524685   24633 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 18:51:16.650845   24633 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 18:51:16.847630   24633 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 18:51:16.847904   24633 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 18:51:17.007883   24633 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 18:51:17.138574   24633 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 18:51:17.419167   24633 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 18:51:17.616983   24633 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 18:51:17.720800   24633 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 18:51:17.721483   24633 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 18:51:17.726904   24633 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 18:51:17.728530   24633 out.go:235]   - Booting up control plane ...
	I0906 18:51:17.728632   24633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 18:51:17.728721   24633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 18:51:17.729028   24633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 18:51:17.746057   24633 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 18:51:17.755129   24633 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 18:51:17.755253   24633 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 18:51:17.907543   24633 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 18:51:17.907667   24633 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 18:51:18.408740   24633 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.62305ms
	I0906 18:51:18.408831   24633 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 18:51:24.434725   24633 kubeadm.go:310] [api-check] The API server is healthy after 6.026907054s
	I0906 18:51:24.446291   24633 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 18:51:24.468363   24633 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 18:51:25.007118   24633 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 18:51:25.007301   24633 kubeadm.go:310] [mark-control-plane] Marking the node ha-313128 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 18:51:25.023020   24633 kubeadm.go:310] [bootstrap-token] Using token: xmh4ax.y6lhpiqw6s4v24x2
	I0906 18:51:25.024167   24633 out.go:235]   - Configuring RBAC rules ...
	I0906 18:51:25.024318   24633 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 18:51:25.031086   24633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 18:51:25.042621   24633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 18:51:25.047120   24633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 18:51:25.051473   24633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 18:51:25.058411   24633 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 18:51:25.073097   24633 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 18:51:25.321353   24633 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 18:51:25.842136   24633 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 18:51:25.843034   24633 kubeadm.go:310] 
	I0906 18:51:25.843096   24633 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 18:51:25.843124   24633 kubeadm.go:310] 
	I0906 18:51:25.843227   24633 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 18:51:25.843241   24633 kubeadm.go:310] 
	I0906 18:51:25.843276   24633 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 18:51:25.843338   24633 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 18:51:25.843402   24633 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 18:51:25.843415   24633 kubeadm.go:310] 
	I0906 18:51:25.843467   24633 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 18:51:25.843477   24633 kubeadm.go:310] 
	I0906 18:51:25.843536   24633 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 18:51:25.843560   24633 kubeadm.go:310] 
	I0906 18:51:25.843646   24633 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 18:51:25.843753   24633 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 18:51:25.843839   24633 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 18:51:25.843848   24633 kubeadm.go:310] 
	I0906 18:51:25.843949   24633 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 18:51:25.844050   24633 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 18:51:25.844074   24633 kubeadm.go:310] 
	I0906 18:51:25.844200   24633 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xmh4ax.y6lhpiqw6s4v24x2 \
	I0906 18:51:25.844323   24633 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 18:51:25.844354   24633 kubeadm.go:310] 	--control-plane 
	I0906 18:51:25.844363   24633 kubeadm.go:310] 
	I0906 18:51:25.844466   24633 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 18:51:25.844477   24633 kubeadm.go:310] 
	I0906 18:51:25.844580   24633 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xmh4ax.y6lhpiqw6s4v24x2 \
	I0906 18:51:25.844727   24633 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 18:51:25.845538   24633 kubeadm.go:310] W0906 18:51:14.671355     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:51:25.845883   24633 kubeadm.go:310] W0906 18:51:14.672130     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:51:25.846046   24633 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
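	The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch of how that pin can be recomputed from ca.crt (the path is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Read the cluster CA certificate; the path matches the certs dir used above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm pins the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("--discovery-token-ca-cert-hash sha256:%s\n", hex.EncodeToString(sum[:]))
}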
	I0906 18:51:25.846079   24633 cni.go:84] Creating CNI manager for ""
	I0906 18:51:25.846092   24633 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0906 18:51:25.848400   24633 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 18:51:25.849478   24633 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 18:51:25.856686   24633 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0906 18:51:25.856705   24633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0906 18:51:25.885689   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 18:51:26.237198   24633 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 18:51:26.237259   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:26.237284   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-313128 minikube.k8s.io/updated_at=2024_09_06T18_51_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=ha-313128 minikube.k8s.io/primary=true
	I0906 18:51:26.382196   24633 ops.go:34] apiserver oom_adj: -16
	I0906 18:51:26.382349   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:26.882958   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:27.383065   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:27.882971   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:28.382740   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:28.883392   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:29.382768   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:29.503653   24633 kubeadm.go:1113] duration metric: took 3.266449086s to wait for elevateKubeSystemPrivileges
	I0906 18:51:29.503690   24633 kubeadm.go:394] duration metric: took 15.087047227s to StartCluster
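	The repeated "kubectl get sa default" runs above are minikube waiting for the default ServiceAccount to appear before granting kube-system privileges; the command is retried roughly every 500ms. A minimal sketch of that wait loop, again assuming a hypothetical runCmd helper:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForDefaultServiceAccount retries "kubectl get sa default" until it succeeds
// or the deadline passes, mirroring the ~500ms polling visible in the log.
func waitForDefaultServiceAccount(runCmd func(string) error, timeout time.Duration) error {
	const cmd = "sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runCmd(cmd); err == nil {
			return nil // the default ServiceAccount exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for the default ServiceAccount")
}

func main() {
	attempts := 0
	run := func(string) error {
		attempts++
		if attempts < 3 {
			return errors.New("serviceaccount not found yet")
		}
		return nil
	}
	fmt.Println(waitForDefaultServiceAccount(run, 30*time.Second))
}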
	I0906 18:51:29.503707   24633 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:29.503798   24633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:51:29.504429   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:29.504705   24633 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:51:29.504727   24633 start.go:241] waiting for startup goroutines ...
	I0906 18:51:29.504725   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 18:51:29.504740   24633 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 18:51:29.504806   24633 addons.go:69] Setting storage-provisioner=true in profile "ha-313128"
	I0906 18:51:29.504826   24633 addons.go:69] Setting default-storageclass=true in profile "ha-313128"
	I0906 18:51:29.504887   24633 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-313128"
	I0906 18:51:29.504835   24633 addons.go:234] Setting addon storage-provisioner=true in "ha-313128"
	I0906 18:51:29.504973   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:51:29.504982   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:51:29.505365   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:29.505367   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:29.505413   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:29.505482   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:29.521255   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
	I0906 18:51:29.521305   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0906 18:51:29.521798   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:29.521805   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:29.522305   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:29.522326   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:29.522450   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:29.522466   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:29.522684   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:29.522816   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:29.522985   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:51:29.523214   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:29.523252   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:29.525805   24633 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:51:29.526131   24633 kapi.go:59] client config for ha-313128: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt", KeyFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key", CAFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 18:51:29.526682   24633 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 18:51:29.526931   24633 addons.go:234] Setting addon default-storageclass=true in "ha-313128"
	I0906 18:51:29.526969   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:51:29.527324   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:29.527352   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:29.539024   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0906 18:51:29.539465   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:29.539994   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:29.540020   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:29.540341   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:29.540550   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:51:29.542534   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:29.542690   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0906 18:51:29.543008   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:29.543404   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:29.543426   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:29.543779   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:29.544220   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:29.544255   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:29.544434   24633 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 18:51:29.545561   24633 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:51:29.545581   24633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 18:51:29.545600   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:29.548949   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:29.549378   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:29.549398   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:29.549581   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:29.549767   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:29.549924   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:29.550085   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:29.559362   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41339
	I0906 18:51:29.559803   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:29.560281   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:29.560305   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:29.560567   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:29.560736   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:51:29.562285   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:29.562497   24633 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 18:51:29.562513   24633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 18:51:29.562526   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:29.564874   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:29.565298   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:29.565326   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:29.565482   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:29.565661   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:29.565799   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:29.565951   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
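	The sshutil lines above construct SSH clients for copying the addon manifests and running kubectl on the node. A hedged sketch of an equivalent client using golang.org/x/crypto/ssh; the user, address, and key path come from the log, the rest is illustrative rather than minikube's exact code:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs use throwaway host keys
	}
	client, err := ssh.Dial("tcp", "192.168.39.70:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("ls /etc/kubernetes/addons")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}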
	I0906 18:51:29.636220   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 18:51:29.682123   24633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:51:29.697841   24633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 18:51:30.246142   24633 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
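	The sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (it inserts a hosts block ahead of the forward plugin and also enables the log plugin). A sketch of the same hosts-block insertion in Go, operating on the Corefile text only; piping the result back through "kubectl replace" is unchanged:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block in front of the "forward . /etc/resolv.conf"
// line of a Corefile, the same edit the sed pipeline in the log performs.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}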
	I0906 18:51:30.545152   24633 main.go:141] libmachine: Making call to close driver server
	I0906 18:51:30.545180   24633 main.go:141] libmachine: (ha-313128) Calling .Close
	I0906 18:51:30.545152   24633 main.go:141] libmachine: Making call to close driver server
	I0906 18:51:30.545250   24633 main.go:141] libmachine: (ha-313128) Calling .Close
	I0906 18:51:30.545507   24633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:51:30.545526   24633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:51:30.545536   24633 main.go:141] libmachine: Making call to close driver server
	I0906 18:51:30.545544   24633 main.go:141] libmachine: (ha-313128) Calling .Close
	I0906 18:51:30.545564   24633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:51:30.545583   24633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:51:30.545596   24633 main.go:141] libmachine: Making call to close driver server
	I0906 18:51:30.545606   24633 main.go:141] libmachine: (ha-313128) Calling .Close
	I0906 18:51:30.545834   24633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:51:30.545850   24633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:51:30.545848   24633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:51:30.545865   24633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:51:30.545912   24633 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 18:51:30.545932   24633 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 18:51:30.546044   24633 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0906 18:51:30.546058   24633 round_trippers.go:469] Request Headers:
	I0906 18:51:30.546068   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:51:30.546078   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:51:30.568360   24633 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0906 18:51:30.569153   24633 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0906 18:51:30.569169   24633 round_trippers.go:469] Request Headers:
	I0906 18:51:30.569177   24633 round_trippers.go:473]     Content-Type: application/json
	I0906 18:51:30.569182   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:51:30.569186   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:51:30.577205   24633 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
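	The GET and PUT against /apis/storage.k8s.io/v1/storageclasses above read the standard StorageClass and write it back, typically to assert its default-class annotation. A hedged client-go sketch of the same round trip; the function and annotation handling are illustrative, not minikube's exact implementation:

package storageutil

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markStandardDefault fetches the "standard" StorageClass and updates it with the
// default-class annotation set, roughly the GET/PUT pair seen in the log.
func markStandardDefault(ctx context.Context, cs kubernetes.Interface) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}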
	I0906 18:51:30.577354   24633 main.go:141] libmachine: Making call to close driver server
	I0906 18:51:30.577370   24633 main.go:141] libmachine: (ha-313128) Calling .Close
	I0906 18:51:30.577655   24633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:51:30.577666   24633 main.go:141] libmachine: (ha-313128) DBG | Closing plugin on server side
	I0906 18:51:30.577675   24633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:51:30.579205   24633 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 18:51:30.580206   24633 addons.go:510] duration metric: took 1.075470312s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0906 18:51:30.580235   24633 start.go:246] waiting for cluster config update ...
	I0906 18:51:30.580249   24633 start.go:255] writing updated cluster config ...
	I0906 18:51:30.581657   24633 out.go:201] 
	I0906 18:51:30.582779   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:51:30.582837   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:51:30.584221   24633 out.go:177] * Starting "ha-313128-m02" control-plane node in "ha-313128" cluster
	I0906 18:51:30.585121   24633 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:51:30.585141   24633 cache.go:56] Caching tarball of preloaded images
	I0906 18:51:30.585214   24633 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 18:51:30.585225   24633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 18:51:30.585293   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:51:30.585489   24633 start.go:360] acquireMachinesLock for ha-313128-m02: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:51:30.585536   24633 start.go:364] duration metric: took 23.513µs to acquireMachinesLock for "ha-313128-m02"
	I0906 18:51:30.585560   24633 start.go:93] Provisioning new machine with config: &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:51:30.585620   24633 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0906 18:51:30.586903   24633 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 18:51:30.586986   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:30.587016   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:30.601355   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0906 18:51:30.601697   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:30.602152   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:30.602171   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:30.602432   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:30.602646   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetMachineName
	I0906 18:51:30.602806   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:30.602963   24633 start.go:159] libmachine.API.Create for "ha-313128" (driver="kvm2")
	I0906 18:51:30.602985   24633 client.go:168] LocalClient.Create starting
	I0906 18:51:30.603023   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 18:51:30.603060   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:51:30.603080   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:51:30.603143   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 18:51:30.603170   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:51:30.603183   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:51:30.603207   24633 main.go:141] libmachine: Running pre-create checks...
	I0906 18:51:30.603219   24633 main.go:141] libmachine: (ha-313128-m02) Calling .PreCreateCheck
	I0906 18:51:30.603399   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetConfigRaw
	I0906 18:51:30.603766   24633 main.go:141] libmachine: Creating machine...
	I0906 18:51:30.603784   24633 main.go:141] libmachine: (ha-313128-m02) Calling .Create
	I0906 18:51:30.603911   24633 main.go:141] libmachine: (ha-313128-m02) Creating KVM machine...
	I0906 18:51:30.605134   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found existing default KVM network
	I0906 18:51:30.605228   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found existing private KVM network mk-ha-313128
	I0906 18:51:30.605381   24633 main.go:141] libmachine: (ha-313128-m02) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02 ...
	I0906 18:51:30.605407   24633 main.go:141] libmachine: (ha-313128-m02) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:51:30.605447   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:30.605351   25019 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:51:30.605536   24633 main.go:141] libmachine: (ha-313128-m02) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 18:51:30.830840   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:30.830729   25019 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa...
	I0906 18:51:31.129668   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:31.129563   25019 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/ha-313128-m02.rawdisk...
	I0906 18:51:31.129699   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Writing magic tar header
	I0906 18:51:31.129714   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Writing SSH key tar header
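	The "Creating ssh key" step above generates the id_rsa / id_rsa.pub pair stored in the machine directory. A minimal sketch of that key generation, assuming RSA keys and illustrative output paths:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the private key and encode it as PEM for id_rsa.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// Derive the authorized_keys form of the public key for id_rsa.pub.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		log.Fatal(err)
	}
}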
	I0906 18:51:31.129722   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:31.129672   25019 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02 ...
	I0906 18:51:31.129811   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02
	I0906 18:51:31.129849   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02 (perms=drwx------)
	I0906 18:51:31.129864   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 18:51:31.129875   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 18:51:31.129891   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:51:31.129901   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 18:51:31.129911   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 18:51:31.129929   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 18:51:31.129942   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins
	I0906 18:51:31.129956   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 18:51:31.129970   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 18:51:31.129981   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home
	I0906 18:51:31.129991   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 18:51:31.130005   24633 main.go:141] libmachine: (ha-313128-m02) Creating domain...
	I0906 18:51:31.130018   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Skipping /home - not owner
	I0906 18:51:31.131009   24633 main.go:141] libmachine: (ha-313128-m02) define libvirt domain using xml: 
	I0906 18:51:31.131029   24633 main.go:141] libmachine: (ha-313128-m02) <domain type='kvm'>
	I0906 18:51:31.131039   24633 main.go:141] libmachine: (ha-313128-m02)   <name>ha-313128-m02</name>
	I0906 18:51:31.131047   24633 main.go:141] libmachine: (ha-313128-m02)   <memory unit='MiB'>2200</memory>
	I0906 18:51:31.131056   24633 main.go:141] libmachine: (ha-313128-m02)   <vcpu>2</vcpu>
	I0906 18:51:31.131067   24633 main.go:141] libmachine: (ha-313128-m02)   <features>
	I0906 18:51:31.131077   24633 main.go:141] libmachine: (ha-313128-m02)     <acpi/>
	I0906 18:51:31.131087   24633 main.go:141] libmachine: (ha-313128-m02)     <apic/>
	I0906 18:51:31.131096   24633 main.go:141] libmachine: (ha-313128-m02)     <pae/>
	I0906 18:51:31.131107   24633 main.go:141] libmachine: (ha-313128-m02)     
	I0906 18:51:31.131117   24633 main.go:141] libmachine: (ha-313128-m02)   </features>
	I0906 18:51:31.131130   24633 main.go:141] libmachine: (ha-313128-m02)   <cpu mode='host-passthrough'>
	I0906 18:51:31.131142   24633 main.go:141] libmachine: (ha-313128-m02)   
	I0906 18:51:31.131152   24633 main.go:141] libmachine: (ha-313128-m02)   </cpu>
	I0906 18:51:31.131169   24633 main.go:141] libmachine: (ha-313128-m02)   <os>
	I0906 18:51:31.131178   24633 main.go:141] libmachine: (ha-313128-m02)     <type>hvm</type>
	I0906 18:51:31.131188   24633 main.go:141] libmachine: (ha-313128-m02)     <boot dev='cdrom'/>
	I0906 18:51:31.131199   24633 main.go:141] libmachine: (ha-313128-m02)     <boot dev='hd'/>
	I0906 18:51:31.131212   24633 main.go:141] libmachine: (ha-313128-m02)     <bootmenu enable='no'/>
	I0906 18:51:31.131222   24633 main.go:141] libmachine: (ha-313128-m02)   </os>
	I0906 18:51:31.131233   24633 main.go:141] libmachine: (ha-313128-m02)   <devices>
	I0906 18:51:31.131245   24633 main.go:141] libmachine: (ha-313128-m02)     <disk type='file' device='cdrom'>
	I0906 18:51:31.131262   24633 main.go:141] libmachine: (ha-313128-m02)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/boot2docker.iso'/>
	I0906 18:51:31.131273   24633 main.go:141] libmachine: (ha-313128-m02)       <target dev='hdc' bus='scsi'/>
	I0906 18:51:31.131284   24633 main.go:141] libmachine: (ha-313128-m02)       <readonly/>
	I0906 18:51:31.131300   24633 main.go:141] libmachine: (ha-313128-m02)     </disk>
	I0906 18:51:31.131314   24633 main.go:141] libmachine: (ha-313128-m02)     <disk type='file' device='disk'>
	I0906 18:51:31.131327   24633 main.go:141] libmachine: (ha-313128-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 18:51:31.131348   24633 main.go:141] libmachine: (ha-313128-m02)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/ha-313128-m02.rawdisk'/>
	I0906 18:51:31.131360   24633 main.go:141] libmachine: (ha-313128-m02)       <target dev='hda' bus='virtio'/>
	I0906 18:51:31.131372   24633 main.go:141] libmachine: (ha-313128-m02)     </disk>
	I0906 18:51:31.131383   24633 main.go:141] libmachine: (ha-313128-m02)     <interface type='network'>
	I0906 18:51:31.131393   24633 main.go:141] libmachine: (ha-313128-m02)       <source network='mk-ha-313128'/>
	I0906 18:51:31.131404   24633 main.go:141] libmachine: (ha-313128-m02)       <model type='virtio'/>
	I0906 18:51:31.131414   24633 main.go:141] libmachine: (ha-313128-m02)     </interface>
	I0906 18:51:31.131425   24633 main.go:141] libmachine: (ha-313128-m02)     <interface type='network'>
	I0906 18:51:31.131436   24633 main.go:141] libmachine: (ha-313128-m02)       <source network='default'/>
	I0906 18:51:31.131446   24633 main.go:141] libmachine: (ha-313128-m02)       <model type='virtio'/>
	I0906 18:51:31.131458   24633 main.go:141] libmachine: (ha-313128-m02)     </interface>
	I0906 18:51:31.131470   24633 main.go:141] libmachine: (ha-313128-m02)     <serial type='pty'>
	I0906 18:51:31.131482   24633 main.go:141] libmachine: (ha-313128-m02)       <target port='0'/>
	I0906 18:51:31.131490   24633 main.go:141] libmachine: (ha-313128-m02)     </serial>
	I0906 18:51:31.131503   24633 main.go:141] libmachine: (ha-313128-m02)     <console type='pty'>
	I0906 18:51:31.131514   24633 main.go:141] libmachine: (ha-313128-m02)       <target type='serial' port='0'/>
	I0906 18:51:31.131526   24633 main.go:141] libmachine: (ha-313128-m02)     </console>
	I0906 18:51:31.131538   24633 main.go:141] libmachine: (ha-313128-m02)     <rng model='virtio'>
	I0906 18:51:31.131550   24633 main.go:141] libmachine: (ha-313128-m02)       <backend model='random'>/dev/random</backend>
	I0906 18:51:31.131560   24633 main.go:141] libmachine: (ha-313128-m02)     </rng>
	I0906 18:51:31.131571   24633 main.go:141] libmachine: (ha-313128-m02)     
	I0906 18:51:31.131581   24633 main.go:141] libmachine: (ha-313128-m02)     
	I0906 18:51:31.131590   24633 main.go:141] libmachine: (ha-313128-m02)   </devices>
	I0906 18:51:31.131600   24633 main.go:141] libmachine: (ha-313128-m02) </domain>
	I0906 18:51:31.131613   24633 main.go:141] libmachine: (ha-313128-m02) 
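	The XML above is then handed to libvirt to define the guest and boot it ("Creating domain..."). A hedged sketch using the libvirt.org/go/libvirt bindings, assuming the domain XML has already been assembled into a string:

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	// Same URI as KVMQemuURI in the cluster config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	domainXML := "<domain type='kvm'>...</domain>" // the XML printed in the log

	// Define the persistent domain from XML, then start it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}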
	I0906 18:51:31.137934   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:d1:48:14 in network default
	I0906 18:51:31.138539   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:31.138557   24633 main.go:141] libmachine: (ha-313128-m02) Ensuring networks are active...
	I0906 18:51:31.139314   24633 main.go:141] libmachine: (ha-313128-m02) Ensuring network default is active
	I0906 18:51:31.139633   24633 main.go:141] libmachine: (ha-313128-m02) Ensuring network mk-ha-313128 is active
	I0906 18:51:31.140092   24633 main.go:141] libmachine: (ha-313128-m02) Getting domain xml...
	I0906 18:51:31.140875   24633 main.go:141] libmachine: (ha-313128-m02) Creating domain...
	I0906 18:51:32.393306   24633 main.go:141] libmachine: (ha-313128-m02) Waiting to get IP...
	I0906 18:51:32.394205   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:32.394523   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:32.394578   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:32.394517   25019 retry.go:31] will retry after 288.850488ms: waiting for machine to come up
	I0906 18:51:32.685225   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:32.685717   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:32.685746   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:32.685671   25019 retry.go:31] will retry after 282.043787ms: waiting for machine to come up
	I0906 18:51:32.969192   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:32.969632   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:32.969658   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:32.969600   25019 retry.go:31] will retry after 363.032435ms: waiting for machine to come up
	I0906 18:51:33.334308   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:33.334785   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:33.334822   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:33.334744   25019 retry.go:31] will retry after 422.058707ms: waiting for machine to come up
	I0906 18:51:33.757898   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:33.758279   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:33.758308   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:33.758233   25019 retry.go:31] will retry after 503.499024ms: waiting for machine to come up
	I0906 18:51:34.262906   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:34.263257   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:34.263285   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:34.263218   25019 retry.go:31] will retry after 689.475949ms: waiting for machine to come up
	I0906 18:51:34.954115   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:34.954716   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:34.954751   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:34.954662   25019 retry.go:31] will retry after 1.00434144s: waiting for machine to come up
	I0906 18:51:35.960231   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:35.960587   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:35.960610   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:35.960542   25019 retry.go:31] will retry after 1.05804784s: waiting for machine to come up
	I0906 18:51:37.020099   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:37.020571   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:37.020599   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:37.020520   25019 retry.go:31] will retry after 1.215751027s: waiting for machine to come up
	I0906 18:51:38.238034   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:38.238501   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:38.238524   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:38.238453   25019 retry.go:31] will retry after 1.44067495s: waiting for machine to come up
	I0906 18:51:39.681354   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:39.681813   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:39.681848   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:39.681767   25019 retry.go:31] will retry after 2.063449934s: waiting for machine to come up
	I0906 18:51:41.746930   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:41.747407   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:41.747437   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:41.747360   25019 retry.go:31] will retry after 2.803466893s: waiting for machine to come up
	I0906 18:51:44.554086   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:44.554574   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:44.554608   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:44.554520   25019 retry.go:31] will retry after 2.881675176s: waiting for machine to come up
	I0906 18:51:47.439208   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:47.439722   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:47.439751   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:47.439671   25019 retry.go:31] will retry after 5.083573314s: waiting for machine to come up
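	The retry.go lines above wait for the new VM to obtain a DHCP lease, stretching the delay between attempts each time. A generic sketch of that pattern; the lookup function stands in for querying the network's DHCP leases for the machine's MAC address:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP keeps calling lookup with a growing delay until it returns an address
// or the overall timeout is exceeded, loosely modeled on the retry intervals in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // back off, but stay within a few seconds
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	lookup := func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.32", nil
	}
	fmt.Println(waitForIP(lookup, time.Minute))
}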
	I0906 18:51:52.525650   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.526025   24633 main.go:141] libmachine: (ha-313128-m02) Found IP for machine: 192.168.39.32
	I0906 18:51:52.526054   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has current primary IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.526065   24633 main.go:141] libmachine: (ha-313128-m02) Reserving static IP address...
	I0906 18:51:52.526419   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find host DHCP lease matching {name: "ha-313128-m02", mac: "52:54:00:0d:cf:ee", ip: "192.168.39.32"} in network mk-ha-313128
	I0906 18:51:52.598045   24633 main.go:141] libmachine: (ha-313128-m02) Reserved static IP address: 192.168.39.32
	I0906 18:51:52.598073   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Getting to WaitForSSH function...
	I0906 18:51:52.598081   24633 main.go:141] libmachine: (ha-313128-m02) Waiting for SSH to be available...
	I0906 18:51:52.601206   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.601738   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:52.601772   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.601998   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Using SSH client type: external
	I0906 18:51:52.602018   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa (-rw-------)
	I0906 18:51:52.602046   24633 main.go:141] libmachine: (ha-313128-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:51:52.602063   24633 main.go:141] libmachine: (ha-313128-m02) DBG | About to run SSH command:
	I0906 18:51:52.602077   24633 main.go:141] libmachine: (ha-313128-m02) DBG | exit 0
	I0906 18:51:52.725210   24633 main.go:141] libmachine: (ha-313128-m02) DBG | SSH cmd err, output: <nil>: 
	I0906 18:51:52.725524   24633 main.go:141] libmachine: (ha-313128-m02) KVM machine creation complete!
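
Machine creation is only declared complete once an SSH command ("exit 0") succeeds against the new guest. A rough sketch of such a readiness wait, under the simplifying assumption that TCP reachability of port 22 is an acceptable signal; the real check above runs an actual command over the established SSH session.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH dials the guest's SSH port until a connection succeeds or the
    // deadline passes. This only proves the port is open; the log above goes
    // one step further and executes "exit 0" over SSH.
    func waitForSSH(addr string, deadline time.Duration) error {
        start := time.Now()
        for time.Since(start) < deadline {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("ssh on %s not reachable within %v", addr, deadline)
    }

    func main() {
        if err := waitForSSH("192.168.39.32:22", 30*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("SSH is available")
    }
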
	I0906 18:51:52.725862   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetConfigRaw
	I0906 18:51:52.726391   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:52.726578   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:52.726731   24633 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 18:51:52.726744   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:51:52.728072   24633 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 18:51:52.728091   24633 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 18:51:52.728097   24633 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 18:51:52.728102   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:52.730282   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.730625   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:52.730651   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.730811   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:52.730997   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.731151   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.731277   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:52.731420   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:52.731665   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:52.731682   24633 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 18:51:52.832298   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:51:52.832322   24633 main.go:141] libmachine: Detecting the provisioner...
	I0906 18:51:52.832332   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:52.834998   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.835332   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:52.835360   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.835465   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:52.835700   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.835842   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.835968   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:52.836089   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:52.836237   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:52.836247   24633 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 18:51:52.937651   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 18:51:52.937719   24633 main.go:141] libmachine: found compatible host: buildroot
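
The provisioner is detected by reading /etc/os-release on the guest and matching the ID field ("buildroot" here). A small sketch of parsing that KEY=VALUE format, using the output captured above as a hard-coded sample instead of a remote command.

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseOSRelease extracts KEY=VALUE pairs from /etc/os-release content,
    // trimming optional surrounding quotes from the values.
    func parseOSRelease(content string) map[string]string {
        out := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(content))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            out[k] = strings.Trim(v, `"`)
        }
        return out
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        info := parseOSRelease(sample)
        fmt.Println("provisioner:", info["ID"], info["VERSION_ID"])
    }
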
	I0906 18:51:52.937727   24633 main.go:141] libmachine: Provisioning with buildroot...
	I0906 18:51:52.937740   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetMachineName
	I0906 18:51:52.937971   24633 buildroot.go:166] provisioning hostname "ha-313128-m02"
	I0906 18:51:52.937987   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetMachineName
	I0906 18:51:52.938117   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:52.941041   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.941365   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:52.941394   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.941540   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:52.941708   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.941883   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.942006   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:52.942155   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:52.942360   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:52.942378   24633 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-313128-m02 && echo "ha-313128-m02" | sudo tee /etc/hostname
	I0906 18:51:53.057183   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128-m02
	
	I0906 18:51:53.057211   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:53.059810   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.060143   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.060164   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.060345   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:53.060534   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.060718   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.060892   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:53.061063   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:53.061257   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:53.061274   24633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-313128-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-313128-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-313128-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:51:53.170161   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:51:53.170199   24633 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 18:51:53.170219   24633 buildroot.go:174] setting up certificates
	I0906 18:51:53.170258   24633 provision.go:84] configureAuth start
	I0906 18:51:53.170278   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetMachineName
	I0906 18:51:53.170577   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:51:53.173163   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.173558   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.173587   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.173768   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:53.175952   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.176269   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.176296   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.176392   24633 provision.go:143] copyHostCerts
	I0906 18:51:53.176419   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:51:53.176452   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 18:51:53.176463   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:51:53.176527   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 18:51:53.176624   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:51:53.176649   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 18:51:53.176655   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:51:53.176691   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 18:51:53.176755   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:51:53.176779   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 18:51:53.176786   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:51:53.176826   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 18:51:53.176916   24633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.ha-313128-m02 san=[127.0.0.1 192.168.39.32 ha-313128-m02 localhost minikube]
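
configureAuth issues a per-machine server certificate whose SAN list matches the log line above (127.0.0.1, 192.168.39.32, ha-313128-m02, localhost, minikube), signed by the profile's CA. A compact standard-library sketch of issuing a certificate with those SANs; a throwaway in-memory CA stands in for ca.pem/ca-key.pem, ECDSA is used for brevity, and error handling is elided.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA, standing in for the ca.pem/ca-key.pem pair on disk.
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs from the log above.
        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-313128-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.32")},
            DNSNames:     []string{"ha-313128-m02", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
        fmt.Println("issued server certificate with SANs for ha-313128-m02")
    }
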
	I0906 18:51:53.531978   24633 provision.go:177] copyRemoteCerts
	I0906 18:51:53.532031   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:51:53.532055   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:53.534641   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.534972   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.534999   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.535174   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:53.535400   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.535565   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:53.535703   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 18:51:53.615451   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 18:51:53.615533   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:51:53.641667   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 18:51:53.641759   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 18:51:53.669096   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 18:51:53.669179   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 18:51:53.695612   24633 provision.go:87] duration metric: took 525.337896ms to configureAuth
	I0906 18:51:53.695645   24633 buildroot.go:189] setting minikube options for container-runtime
	I0906 18:51:53.695825   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:51:53.695887   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:53.698363   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.698782   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.698810   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.698997   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:53.699207   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.699366   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.699522   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:53.699716   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:53.699901   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:53.699924   24633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 18:51:53.915727   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 18:51:53.915756   24633 main.go:141] libmachine: Checking connection to Docker...
	I0906 18:51:53.915775   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetURL
	I0906 18:51:53.917175   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Using libvirt version 6000000
	I0906 18:51:53.919363   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.919721   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.919762   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.919875   24633 main.go:141] libmachine: Docker is up and running!
	I0906 18:51:53.919894   24633 main.go:141] libmachine: Reticulating splines...
	I0906 18:51:53.919901   24633 client.go:171] duration metric: took 23.31690762s to LocalClient.Create
	I0906 18:51:53.919925   24633 start.go:167] duration metric: took 23.316961673s to libmachine.API.Create "ha-313128"
	I0906 18:51:53.919943   24633 start.go:293] postStartSetup for "ha-313128-m02" (driver="kvm2")
	I0906 18:51:53.919959   24633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:51:53.919977   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:53.920221   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:51:53.920243   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:53.922141   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.922443   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.922468   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.922586   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:53.922753   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.922903   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:53.923033   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 18:51:54.007879   24633 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:51:54.012541   24633 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 18:51:54.012572   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 18:51:54.012633   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 18:51:54.012700   24633 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 18:51:54.012709   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 18:51:54.012788   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 18:51:54.022295   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:51:54.048093   24633 start.go:296] duration metric: took 128.135633ms for postStartSetup
	I0906 18:51:54.048145   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetConfigRaw
	I0906 18:51:54.048680   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:51:54.051341   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.051693   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:54.051719   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.051982   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:51:54.052393   24633 start.go:128] duration metric: took 23.466754043s to createHost
	I0906 18:51:54.052441   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:54.054574   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.054926   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:54.054949   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.055147   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:54.055327   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:54.055604   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:54.055746   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:54.055907   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:54.056109   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:54.056121   24633 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 18:51:54.158010   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725648714.116348320
	
	I0906 18:51:54.158037   24633 fix.go:216] guest clock: 1725648714.116348320
	I0906 18:51:54.158048   24633 fix.go:229] Guest: 2024-09-06 18:51:54.11634832 +0000 UTC Remote: 2024-09-06 18:51:54.052421453 +0000 UTC m=+71.844651063 (delta=63.926867ms)
	I0906 18:51:54.158071   24633 fix.go:200] guest clock delta is within tolerance: 63.926867ms
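
The guest clock is read with "date +%s.%N" and compared against the host's timestamp; here the delta of about 64ms is within tolerance, so no resync is needed. A tiny sketch of that comparison using the two timestamps from the log; the tolerance value is an assumption for illustration.

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // clockDelta reports how far the guest clock is from the host clock and
    // whether it falls within the allowed tolerance.
    func clockDelta(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
        d := guest.Sub(host)
        return d, math.Abs(float64(d)) <= float64(tolerance)
    }

    func main() {
        host := time.Date(2024, 9, 6, 18, 51, 54, 52421453, time.UTC)
        guest := time.Date(2024, 9, 6, 18, 51, 54, 116348320, time.UTC)
        d, ok := clockDelta(host, guest, 2*time.Second) // tolerance is an assumed value
        fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
    }
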
	I0906 18:51:54.158081   24633 start.go:83] releasing machines lock for "ha-313128-m02", held for 23.572533563s
	I0906 18:51:54.158106   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:54.158351   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:51:54.160983   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.161491   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:54.161519   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.163916   24633 out.go:177] * Found network options:
	I0906 18:51:54.165233   24633 out.go:177]   - NO_PROXY=192.168.39.70
	W0906 18:51:54.166526   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 18:51:54.166557   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:54.167095   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:54.167291   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:54.167372   24633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:51:54.167411   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	W0906 18:51:54.167495   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 18:51:54.167570   24633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 18:51:54.167592   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:54.170184   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.170377   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.170565   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:54.170590   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.170805   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:54.170809   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:54.170831   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.170975   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:54.170979   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:54.171132   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:54.171134   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:54.171326   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:54.171327   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 18:51:54.171456   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 18:51:54.400160   24633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 18:51:54.407055   24633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:51:54.407111   24633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:51:54.425130   24633 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 18:51:54.425152   24633 start.go:495] detecting cgroup driver to use...
	I0906 18:51:54.425239   24633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 18:51:54.442658   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:51:54.457602   24633 docker.go:217] disabling cri-docker service (if available) ...
	I0906 18:51:54.457666   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 18:51:54.472644   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 18:51:54.487290   24633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 18:51:54.602638   24633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 18:51:54.769543   24633 docker.go:233] disabling docker service ...
	I0906 18:51:54.769604   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 18:51:54.784508   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 18:51:54.799154   24633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 18:51:54.927422   24633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 18:51:55.048008   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 18:51:55.062937   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:51:55.083211   24633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 18:51:55.083270   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.094129   24633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 18:51:55.094193   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.104791   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.116503   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.126980   24633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:51:55.138550   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.149446   24633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.167080   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.178377   24633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:51:55.187946   24633 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 18:51:55.188002   24633 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 18:51:55.203527   24633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:51:55.222751   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:51:55.340905   24633 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 18:51:55.431581   24633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 18:51:55.431646   24633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 18:51:55.436404   24633 start.go:563] Will wait 60s for crictl version
	I0906 18:51:55.436485   24633 ssh_runner.go:195] Run: which crictl
	I0906 18:51:55.440395   24633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:51:55.481607   24633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
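
After CRI-O is reconfigured and restarted, the tooling waits up to 60s for the runtime socket and then for "crictl version" to answer, which is the output shown just above. A sketch of that readiness loop using os/exec; the socket path and timeout mirror the log, the rest is illustrative.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForCRI waits for the CRI socket to exist, then polls `crictl version`
    // until the runtime responds or the deadline passes.
    func waitForCRI(socket string, deadline time.Duration) error {
        start := time.Now()
        for time.Since(start) < deadline {
            if _, err := os.Stat(socket); err == nil {
                out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
                if err == nil {
                    fmt.Print(string(out))
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("CRI runtime at %s not ready within %v", socket, deadline)
    }

    func main() {
        if err := waitForCRI("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
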
	I0906 18:51:55.481694   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:51:55.512073   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:51:55.540712   24633 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 18:51:55.541928   24633 out.go:177]   - env NO_PROXY=192.168.39.70
	I0906 18:51:55.542984   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:51:55.546063   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:55.546500   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:55.546525   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:55.546782   24633 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 18:51:55.551222   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:51:55.563782   24633 mustload.go:65] Loading cluster: ha-313128
	I0906 18:51:55.564006   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:51:55.564375   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:55.564406   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:55.579244   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0906 18:51:55.579765   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:55.580261   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:55.580287   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:55.580605   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:55.580771   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:51:55.582340   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:51:55.582738   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:55.582769   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:55.598072   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I0906 18:51:55.598492   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:55.598909   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:55.598929   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:55.599284   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:55.599472   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:55.599640   24633 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128 for IP: 192.168.39.32
	I0906 18:51:55.599649   24633 certs.go:194] generating shared ca certs ...
	I0906 18:51:55.599664   24633 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:55.599777   24633 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 18:51:55.599812   24633 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 18:51:55.599821   24633 certs.go:256] generating profile certs ...
	I0906 18:51:55.599884   24633 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key
	I0906 18:51:55.599908   24633 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.45734e05
	I0906 18:51:55.599923   24633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.45734e05 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.70 192.168.39.32 192.168.39.254]
	I0906 18:51:55.664204   24633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.45734e05 ...
	I0906 18:51:55.664233   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.45734e05: {Name:mkb4a2e0ab1ba114f51a63da71c5c0ab5250a4f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:55.664415   24633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.45734e05 ...
	I0906 18:51:55.664439   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.45734e05: {Name:mkf05835fddfb31126cf809ae0a4fed25c679c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:55.664566   24633 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.45734e05 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt
	I0906 18:51:55.664699   24633 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.45734e05 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key
	I0906 18:51:55.664816   24633 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key
	I0906 18:51:55.664844   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 18:51:55.664883   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 18:51:55.664914   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 18:51:55.664933   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 18:51:55.664951   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 18:51:55.664969   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 18:51:55.664986   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 18:51:55.665000   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 18:51:55.665050   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 18:51:55.665085   24633 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 18:51:55.665094   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 18:51:55.665116   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:51:55.665148   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:51:55.665189   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 18:51:55.665244   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:51:55.665288   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 18:51:55.665309   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:55.665327   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 18:51:55.665369   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:55.668143   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:55.668470   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:55.668491   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:55.668681   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:55.668886   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:55.669057   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:55.669166   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:55.745232   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 18:51:55.751412   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 18:51:55.765984   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 18:51:55.770489   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 18:51:55.782003   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 18:51:55.786857   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 18:51:55.798862   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 18:51:55.803225   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0906 18:51:55.813358   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 18:51:55.817418   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 18:51:55.827594   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 18:51:55.831544   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0906 18:51:55.843360   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:51:55.869870   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 18:51:55.894969   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:51:55.919286   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:51:55.944458   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0906 18:51:55.968696   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 18:51:55.992704   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:51:56.015928   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 18:51:56.038934   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 18:51:56.062758   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:51:56.086178   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 18:51:56.109157   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 18:51:56.126213   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 18:51:56.144980   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 18:51:56.163980   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0906 18:51:56.181686   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 18:51:56.200170   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0906 18:51:56.217739   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 18:51:56.236591   24633 ssh_runner.go:195] Run: openssl version
	I0906 18:51:56.242674   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:51:56.254908   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:56.259760   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:56.259809   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:56.266372   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 18:51:56.277202   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 18:51:56.288013   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 18:51:56.292440   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 18:51:56.292490   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 18:51:56.298189   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 18:51:56.308729   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 18:51:56.319322   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 18:51:56.323443   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 18:51:56.323486   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 18:51:56.328874   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 18:51:56.339327   24633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:51:56.343147   24633 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:51:56.343199   24633 kubeadm.go:934] updating node {m02 192.168.39.32 8443 v1.31.0 crio true true} ...
	I0906 18:51:56.343297   24633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-313128-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 18:51:56.343324   24633 kube-vip.go:115] generating kube-vip config ...
	I0906 18:51:56.343360   24633 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 18:51:56.360229   24633 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 18:51:56.360317   24633 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
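
The kube-vip static-pod manifest above is generated per cluster, with the control-plane VIP (192.168.39.254), the API-server port and the load-balancing switches filled in. A stripped-down sketch of rendering a few of those fields with text/template; the template body is a simplification for illustration, not minikube's actual template.

    package main

    import (
        "os"
        "text/template"
    )

    // vipParams holds the per-cluster values injected into the manifest.
    type vipParams struct {
        Address  string
        Port     string
        LBEnable bool
    }

    const manifest = `    env:
        - name: port
          value: "{{ .Port }}"
        - name: address
          value: {{ .Address }}
        - name: lb_enable
          value: "{{ .LBEnable }}"
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(manifest))
        // Values taken from the generated config shown above.
        _ = t.Execute(os.Stdout, vipParams{Address: "192.168.39.254", Port: "8443", LBEnable: true})
    }
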
	I0906 18:51:56.360373   24633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:51:56.370531   24633 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0906 18:51:56.370590   24633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0906 18:51:56.379939   24633 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0906 18:51:56.379974   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0906 18:51:56.380040   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0906 18:51:56.380051   24633 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0906 18:51:56.380081   24633 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0906 18:51:56.384231   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0906 18:51:56.384260   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0906 18:51:56.986596   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0906 18:51:56.986724   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0906 18:51:56.992796   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0906 18:51:56.992827   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0906 18:51:57.271779   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:51:57.287745   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0906 18:51:57.287836   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0906 18:51:57.293546   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0906 18:51:57.293586   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
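
The three transfers above follow the same pattern: each release binary is fetched from dl.k8s.io with a "?checksum=file:<url>.sha256" hint and then copied into /var/lib/minikube/binaries on the node. A minimal sketch of that download-and-verify step for kubelet, assuming the dl.k8s.io URLs from the log; the helper below is illustrative and is not minikube's download package.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex sha256 of what was written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
	sum, err := fetch(base, "/tmp/kubelet")
	if err != nil {
		panic(err)
	}
	// The published .sha256 file contains the hex digest (possibly followed by a filename).
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if !strings.HasPrefix(strings.TrimSpace(string(want)), sum) {
		panic("checksum mismatch for kubelet")
	}
	fmt.Println("kubelet sha256 verified:", sum)
}
```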
	I0906 18:51:57.620524   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 18:51:57.629974   24633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 18:51:57.646374   24633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:51:57.662738   24633 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0906 18:51:57.679087   24633 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0906 18:51:57.682857   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:51:57.695646   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:51:57.820090   24633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:51:57.837441   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:51:57.837817   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:57.837860   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:57.852429   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0906 18:51:57.852901   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:57.853376   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:57.853397   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:57.853713   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:57.853917   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:57.854070   24633 start.go:317] joinCluster: &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:51:57.854195   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0906 18:51:57.854218   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:57.857048   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:57.857524   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:57.857553   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:57.857782   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:57.857955   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:57.858104   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:57.858241   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:58.001758   24633 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:51:58.001809   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token emqixv.kkhhq8mwvy4cltk9 --discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-313128-m02 --control-plane --apiserver-advertise-address=192.168.39.32 --apiserver-bind-port=8443"
	I0906 18:52:20.534856   24633 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token emqixv.kkhhq8mwvy4cltk9 --discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-313128-m02 --control-plane --apiserver-advertise-address=192.168.39.32 --apiserver-bind-port=8443": (22.533021448s)
	I0906 18:52:20.534908   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0906 18:52:21.036721   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-313128-m02 minikube.k8s.io/updated_at=2024_09_06T18_52_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=ha-313128 minikube.k8s.io/primary=false
	I0906 18:52:21.144223   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-313128-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0906 18:52:21.236895   24633 start.go:319] duration metric: took 23.382822757s to joinCluster
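
The join above is two steps: "kubeadm token create --print-join-command --ttl=0" on the existing control plane, then that generated command plus --control-plane, --apiserver-advertise-address and --apiserver-bind-port on m02, followed by labeling and untainting the new node. A rough local sketch of driving the two kubeadm steps with os/exec; minikube actually runs them over SSH inside the VMs, so this wrapper is illustrative only.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// On the existing control plane: mint a join command with a non-expiring token,
	// as in the "kubeadm token create --print-join-command --ttl=0" run above.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// On the new node: run that command with the extra control-plane flags seen in the log.
	args := append(strings.Fields(joinCmd)[1:], // drop the leading "kubeadm"
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.32",
		"--apiserver-bind-port=8443",
	)
	join := exec.Command("kubeadm", args...)
	join.Stdout, join.Stderr = os.Stdout, os.Stderr
	if err := join.Run(); err != nil {
		panic(fmt.Errorf("kubeadm join failed: %w", err))
	}
	fmt.Println("second control plane joined")
}
```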
	I0906 18:52:21.237034   24633 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:52:21.237311   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:52:21.238436   24633 out.go:177] * Verifying Kubernetes components...
	I0906 18:52:21.239623   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:52:21.453669   24633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:52:21.475521   24633 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:52:21.475854   24633 kapi.go:59] client config for ha-313128: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt", KeyFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key", CAFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 18:52:21.475946   24633 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.70:8443
	I0906 18:52:21.476228   24633 node_ready.go:35] waiting up to 6m0s for node "ha-313128-m02" to be "Ready" ...
	I0906 18:52:21.476348   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:21.476360   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:21.476371   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:21.476381   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:21.499552   24633 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0906 18:52:21.976507   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:21.976533   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:21.976545   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:21.976552   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:21.985880   24633 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0906 18:52:22.476771   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:22.476796   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:22.476808   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:22.476815   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:22.514723   24633 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0906 18:52:22.976806   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:22.976831   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:22.976843   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:22.976848   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:22.985889   24633 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0906 18:52:23.476790   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:23.476815   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:23.476826   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:23.476834   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:23.494440   24633 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0906 18:52:23.495067   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:23.977449   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:23.977471   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:23.977500   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:23.977507   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:24.083583   24633 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I0906 18:52:24.476646   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:24.476677   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:24.476688   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:24.476695   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:24.480633   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:24.976619   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:24.976639   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:24.976647   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:24.976652   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:24.979550   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:25.476556   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:25.476578   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:25.476586   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:25.476591   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:25.482148   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:52:25.977279   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:25.977300   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:25.977306   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:25.977310   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:25.981396   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:25.982519   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:26.476895   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:26.476918   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:26.476925   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:26.476929   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:26.480635   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:26.976709   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:26.976732   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:26.976740   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:26.976748   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:26.979883   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:27.477476   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:27.477499   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:27.477511   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:27.477516   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:27.483649   24633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 18:52:27.976824   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:27.976866   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:27.976878   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:27.976884   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:27.979837   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:28.476692   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:28.476712   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:28.476720   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:28.476724   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:28.479731   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:28.480725   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:28.977152   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:28.977174   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:28.977184   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:28.977188   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:28.980274   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:29.477277   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:29.477300   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:29.477310   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:29.477316   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:29.484774   24633 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 18:52:29.977232   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:29.977253   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:29.977261   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:29.977265   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:29.980398   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:30.476483   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:30.476507   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:30.476516   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:30.476520   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:30.479630   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:30.976384   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:30.976408   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:30.976417   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:30.976422   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:30.979366   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:30.980142   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:31.476436   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:31.476458   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:31.476466   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:31.476470   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:31.482330   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:52:31.976641   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:31.976671   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:31.976680   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:31.976687   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:31.979507   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:32.477379   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:32.477400   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:32.477408   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:32.477411   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:32.480314   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:32.976836   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:32.976871   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:32.976883   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:32.976890   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:32.988922   24633 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 18:52:32.989409   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:33.476761   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:33.476786   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:33.476797   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:33.476802   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:33.482012   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:52:33.976791   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:33.976810   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:33.976819   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:33.976822   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:33.979927   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:34.477153   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:34.477175   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:34.477182   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:34.477187   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:34.480048   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:34.977233   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:34.977254   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:34.977261   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:34.977265   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:34.980346   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:35.477347   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:35.477380   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.477387   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.477391   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:35.483375   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:52:35.484012   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:35.976573   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:35.976595   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.976606   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.976611   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:35.979492   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:35.980085   24633 node_ready.go:49] node "ha-313128-m02" has status "Ready":"True"
	I0906 18:52:35.980104   24633 node_ready.go:38] duration metric: took 14.503855476s for node "ha-313128-m02" to be "Ready" ...
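
The polling loop above is raw REST against /api/v1/nodes/ha-313128-m02 until the node's Ready condition flips to True (about 14.5s here). The equivalent check written against client-go might look like the sketch below, reusing the profile kubeconfig path from this log; the interval and timeout are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19576-6021/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms and give up after 6 minutes, matching the "waiting up to 6m0s" in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-313128-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return nodeReady(n), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-313128-m02 is Ready")
}
```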
	I0906 18:52:35.980115   24633 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:52:35.980210   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:52:35.980221   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.980230   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.980235   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:35.984206   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:35.991932   24633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:35.992021   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-gccvh
	I0906 18:52:35.992033   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.992041   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.992047   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:35.995101   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:35.995664   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:35.995680   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.995695   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:35.995699   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.998302   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:35.998982   24633 pod_ready.go:93] pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:35.999000   24633 pod_ready.go:82] duration metric: took 7.045331ms for pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:35.999008   24633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:35.999056   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-gk28z
	I0906 18:52:35.999063   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.999070   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.999073   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.001831   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.002473   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:36.002488   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.002495   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.002500   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.005397   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.006213   24633 pod_ready.go:93] pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:36.006228   24633 pod_ready.go:82] duration metric: took 7.214096ms for pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:36.006238   24633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:36.006284   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128
	I0906 18:52:36.006296   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.006303   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.006307   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.008599   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.009377   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:36.009391   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.009398   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.009402   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.012269   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.012885   24633 pod_ready.go:93] pod "etcd-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:36.012904   24633 pod_ready.go:82] duration metric: took 6.659121ms for pod "etcd-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:36.012928   24633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:36.012985   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:36.012993   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.012999   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.013003   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.015599   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.016661   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:36.016675   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.016681   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.016686   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.019340   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.513636   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:36.513665   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.513675   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.513681   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.517307   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:36.518008   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:36.518023   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.518029   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.518034   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.520463   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:37.013212   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:37.013239   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:37.013248   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:37.013251   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:37.016567   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:37.017173   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:37.017190   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:37.017201   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:37.017205   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:37.019356   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:37.513989   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:37.514013   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:37.514021   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:37.514024   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:37.517392   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:37.518329   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:37.518347   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:37.518357   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:37.518365   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:37.520918   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:38.013675   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:38.013699   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:38.013707   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:38.013711   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:38.024772   24633 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0906 18:52:38.025369   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:38.025387   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:38.025397   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:38.025402   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:38.030416   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:38.030940   24633 pod_ready.go:103] pod "etcd-ha-313128-m02" in "kube-system" namespace has status "Ready":"False"
	I0906 18:52:38.513208   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:38.513230   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:38.513237   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:38.513245   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:38.516361   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:38.517012   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:38.517033   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:38.517041   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:38.517046   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:38.519644   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.014111   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:39.014137   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.014148   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.014155   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.018177   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:39.018872   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:39.018888   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.018895   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.018899   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.021072   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.021575   24633 pod_ready.go:93] pod "etcd-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:39.021594   24633 pod_ready.go:82] duration metric: took 3.008654084s for pod "etcd-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.021615   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.021669   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128
	I0906 18:52:39.021677   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.021684   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.021690   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.023922   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.024564   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:39.024578   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.024585   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.024590   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.026527   24633 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 18:52:39.027115   24633 pod_ready.go:93] pod "kube-apiserver-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:39.027137   24633 pod_ready.go:82] duration metric: took 5.508891ms for pod "kube-apiserver-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.027147   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.027203   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128-m02
	I0906 18:52:39.027213   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.027223   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.027231   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.029427   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.177388   24633 request.go:632] Waited for 147.307588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:39.177449   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:39.177456   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.177467   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.177486   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.180429   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.181364   24633 pod_ready.go:93] pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:39.181385   24633 pod_ready.go:82] duration metric: took 154.23065ms for pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.181397   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.376766   24633 request.go:632] Waited for 195.274368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128
	I0906 18:52:39.376882   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128
	I0906 18:52:39.376895   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.376909   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.376917   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.380203   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:39.577255   24633 request.go:632] Waited for 196.270673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:39.577340   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:39.577351   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.577362   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.577369   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.580260   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.580713   24633 pod_ready.go:93] pod "kube-controller-manager-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:39.580730   24633 pod_ready.go:82] duration metric: took 399.322629ms for pod "kube-controller-manager-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.580744   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.776877   24633 request.go:632] Waited for 196.02646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m02
	I0906 18:52:39.776928   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m02
	I0906 18:52:39.776933   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.776940   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.776946   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.779995   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:39.977057   24633 request.go:632] Waited for 196.350023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:39.977112   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:39.977117   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.977124   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.977129   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.980556   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:39.981147   24633 pod_ready.go:93] pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:39.981167   24633 pod_ready.go:82] duration metric: took 400.414888ms for pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.981182   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h5xn7" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:40.177276   24633 request.go:632] Waited for 196.01678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5xn7
	I0906 18:52:40.177341   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5xn7
	I0906 18:52:40.177346   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:40.177353   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:40.177360   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:40.181270   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:40.377330   24633 request.go:632] Waited for 195.375056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:40.377418   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:40.377425   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:40.377438   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:40.377445   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:40.380818   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:40.381384   24633 pod_ready.go:93] pod "kube-proxy-h5xn7" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:40.381401   24633 pod_ready.go:82] duration metric: took 400.208949ms for pod "kube-proxy-h5xn7" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:40.381410   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xjp6p" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:40.577561   24633 request.go:632] Waited for 196.067497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjp6p
	I0906 18:52:40.577630   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjp6p
	I0906 18:52:40.577639   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:40.577650   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:40.577661   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:40.581754   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:40.776916   24633 request.go:632] Waited for 194.18645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:40.776995   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:40.777003   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:40.777013   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:40.777022   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:40.781043   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:40.781927   24633 pod_ready.go:93] pod "kube-proxy-xjp6p" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:40.781945   24633 pod_ready.go:82] duration metric: took 400.528095ms for pod "kube-proxy-xjp6p" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:40.781954   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:40.977226   24633 request.go:632] Waited for 195.19516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128
	I0906 18:52:40.977304   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128
	I0906 18:52:40.977311   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:40.977322   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:40.977331   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:40.981411   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:41.176594   24633 request.go:632] Waited for 194.339343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:41.176659   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:41.176664   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.176675   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.176689   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.180585   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:41.181224   24633 pod_ready.go:93] pod "kube-scheduler-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:41.181246   24633 pod_ready.go:82] duration metric: took 399.28558ms for pod "kube-scheduler-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:41.181256   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:41.377364   24633 request.go:632] Waited for 196.025341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m02
	I0906 18:52:41.377418   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m02
	I0906 18:52:41.377424   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.377431   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.377434   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.381071   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:41.577294   24633 request.go:632] Waited for 195.374529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:41.577367   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:41.577376   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.577383   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.577392   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.581274   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:41.582162   24633 pod_ready.go:93] pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:41.582179   24633 pod_ready.go:82] duration metric: took 400.916754ms for pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:41.582189   24633 pod_ready.go:39] duration metric: took 5.602061956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
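
The extra wait above lists kube-system pods for each control-plane label selector and requires their PodReady condition to be True. A compact client-go sketch of that check, again assuming the profile kubeconfig path; the selector list is copied from the log line above.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every kube-system pod matching each of the given
// label selectors has its PodReady condition set to True.
func podsReady(ctx context.Context, cs kubernetes.Interface, selectors []string) (bool, error) {
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19576-6021/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same label selectors listed in the log line above.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	ok, err := podsReady(context.Background(), cs, selectors)
	if err != nil {
		panic(err)
	}
	fmt.Println("all system-critical pods ready:", ok)
}
```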
	I0906 18:52:41.582208   24633 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:52:41.582266   24633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:52:41.598573   24633 api_server.go:72] duration metric: took 20.361479931s to wait for apiserver process to appear ...
	I0906 18:52:41.598597   24633 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:52:41.598619   24633 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0906 18:52:41.604030   24633 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I0906 18:52:41.604099   24633 round_trippers.go:463] GET https://192.168.39.70:8443/version
	I0906 18:52:41.604108   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.604116   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.604122   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.605093   24633 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 18:52:41.605195   24633 api_server.go:141] control plane version: v1.31.0
	I0906 18:52:41.605213   24633 api_server.go:131] duration metric: took 6.609497ms to wait for apiserver health ...
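
The health gate above is a plain HTTPS GET against /healthz on the control-plane endpoint, followed by /version to read the server build; /healthz answers 200 with the body "ok". A sketch of such a probe, with certificate verification skipped purely as a simplification (minikube itself validates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Simplification for this sketch only: trust any certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.70:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
}
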
	I0906 18:52:41.605223   24633 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:52:41.776652   24633 request.go:632] Waited for 171.293715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:52:41.776721   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:52:41.776728   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.776738   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.776743   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.782425   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:52:41.787311   24633 system_pods.go:59] 17 kube-system pods found
	I0906 18:52:41.787363   24633 system_pods.go:61] "coredns-6f6b679f8f-gccvh" [9b7c0e1a-3359-4f9f-826c-b75cbdfcd500] Running
	I0906 18:52:41.787373   24633 system_pods.go:61] "coredns-6f6b679f8f-gk28z" [ab595ef6-eaa8-44a0-bdad-ddd59c8d052d] Running
	I0906 18:52:41.787379   24633 system_pods.go:61] "etcd-ha-313128" [b4550b86-3359-44e6-a495-7db003a1bb95] Running
	I0906 18:52:41.787389   24633 system_pods.go:61] "etcd-ha-313128-m02" [d42fd6b2-4ecd-49e8-b5b2-5b29fabe2d1e] Running
	I0906 18:52:41.787394   24633 system_pods.go:61] "kindnet-h2trt" [90af3550-1fae-46bd-9329-f185fcdb23c6] Running
	I0906 18:52:41.787400   24633 system_pods.go:61] "kindnet-t65ls" [657498aa-b76e-4eb2-abbe-5d8a050fc415] Running
	I0906 18:52:41.787407   24633 system_pods.go:61] "kube-apiserver-ha-313128" [081ff647-e9c5-4cce-895a-e5e660db1acc] Running
	I0906 18:52:41.787413   24633 system_pods.go:61] "kube-apiserver-ha-313128-m02" [bed2808b-bef1-4fd4-a811-762a5ff46343] Running
	I0906 18:52:41.787419   24633 system_pods.go:61] "kube-controller-manager-ha-313128" [ba0308a5-06d8-468b-a3a1-e95a28a52dd7] Running
	I0906 18:52:41.787428   24633 system_pods.go:61] "kube-controller-manager-ha-313128-m02" [3f4032ce-c8b4-4a2c-9384-82d5d5ec0874] Running
	I0906 18:52:41.787433   24633 system_pods.go:61] "kube-proxy-h5xn7" [e45358c5-398e-4d33-9bd0-a4f28ce17ac9] Running
	I0906 18:52:41.787438   24633 system_pods.go:61] "kube-proxy-xjp6p" [0cbbf003-361c-441e-a2fe-18783999b020] Running
	I0906 18:52:41.787446   24633 system_pods.go:61] "kube-scheduler-ha-313128" [8580599b-125e-4a2f-9019-41b305c0f611] Running
	I0906 18:52:41.787454   24633 system_pods.go:61] "kube-scheduler-ha-313128-m02" [81cb0c5f-7e54-4e8c-b089-d6a4e2c9cbf0] Running
	I0906 18:52:41.787459   24633 system_pods.go:61] "kube-vip-ha-313128" [6e270949-38fe-475f-b902-ede9d2cb795f] Running
	I0906 18:52:41.787464   24633 system_pods.go:61] "kube-vip-ha-313128-m02" [949996a0-0ce0-4ce4-b9ec-86c8f35a4a96] Running
	I0906 18:52:41.787470   24633 system_pods.go:61] "storage-provisioner" [6c957eac-7904-4c39-b858-bfb7da32c75c] Running
	I0906 18:52:41.787479   24633 system_pods.go:74] duration metric: took 182.248108ms to wait for pod list to return data ...
	I0906 18:52:41.787490   24633 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:52:41.976938   24633 request.go:632] Waited for 189.371408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0906 18:52:41.977003   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0906 18:52:41.977009   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.977019   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.977026   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.981174   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:41.981432   24633 default_sa.go:45] found service account: "default"
	I0906 18:52:41.981453   24633 default_sa.go:55] duration metric: took 193.950991ms for default service account to be created ...
	I0906 18:52:41.981463   24633 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:52:42.176877   24633 request.go:632] Waited for 195.280058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:52:42.176942   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:52:42.176949   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:42.176959   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:42.176967   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:42.183456   24633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 18:52:42.187964   24633 system_pods.go:86] 17 kube-system pods found
	I0906 18:52:42.187995   24633 system_pods.go:89] "coredns-6f6b679f8f-gccvh" [9b7c0e1a-3359-4f9f-826c-b75cbdfcd500] Running
	I0906 18:52:42.188001   24633 system_pods.go:89] "coredns-6f6b679f8f-gk28z" [ab595ef6-eaa8-44a0-bdad-ddd59c8d052d] Running
	I0906 18:52:42.188005   24633 system_pods.go:89] "etcd-ha-313128" [b4550b86-3359-44e6-a495-7db003a1bb95] Running
	I0906 18:52:42.188009   24633 system_pods.go:89] "etcd-ha-313128-m02" [d42fd6b2-4ecd-49e8-b5b2-5b29fabe2d1e] Running
	I0906 18:52:42.188012   24633 system_pods.go:89] "kindnet-h2trt" [90af3550-1fae-46bd-9329-f185fcdb23c6] Running
	I0906 18:52:42.188016   24633 system_pods.go:89] "kindnet-t65ls" [657498aa-b76e-4eb2-abbe-5d8a050fc415] Running
	I0906 18:52:42.188020   24633 system_pods.go:89] "kube-apiserver-ha-313128" [081ff647-e9c5-4cce-895a-e5e660db1acc] Running
	I0906 18:52:42.188024   24633 system_pods.go:89] "kube-apiserver-ha-313128-m02" [bed2808b-bef1-4fd4-a811-762a5ff46343] Running
	I0906 18:52:42.188027   24633 system_pods.go:89] "kube-controller-manager-ha-313128" [ba0308a5-06d8-468b-a3a1-e95a28a52dd7] Running
	I0906 18:52:42.188030   24633 system_pods.go:89] "kube-controller-manager-ha-313128-m02" [3f4032ce-c8b4-4a2c-9384-82d5d5ec0874] Running
	I0906 18:52:42.188035   24633 system_pods.go:89] "kube-proxy-h5xn7" [e45358c5-398e-4d33-9bd0-a4f28ce17ac9] Running
	I0906 18:52:42.188038   24633 system_pods.go:89] "kube-proxy-xjp6p" [0cbbf003-361c-441e-a2fe-18783999b020] Running
	I0906 18:52:42.188040   24633 system_pods.go:89] "kube-scheduler-ha-313128" [8580599b-125e-4a2f-9019-41b305c0f611] Running
	I0906 18:52:42.188043   24633 system_pods.go:89] "kube-scheduler-ha-313128-m02" [81cb0c5f-7e54-4e8c-b089-d6a4e2c9cbf0] Running
	I0906 18:52:42.188046   24633 system_pods.go:89] "kube-vip-ha-313128" [6e270949-38fe-475f-b902-ede9d2cb795f] Running
	I0906 18:52:42.188049   24633 system_pods.go:89] "kube-vip-ha-313128-m02" [949996a0-0ce0-4ce4-b9ec-86c8f35a4a96] Running
	I0906 18:52:42.188052   24633 system_pods.go:89] "storage-provisioner" [6c957eac-7904-4c39-b858-bfb7da32c75c] Running
	I0906 18:52:42.188057   24633 system_pods.go:126] duration metric: took 206.585774ms to wait for k8s-apps to be running ...
	I0906 18:52:42.188065   24633 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:52:42.188104   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:52:42.202879   24633 system_svc.go:56] duration metric: took 14.807481ms WaitForService to wait for kubelet
	I0906 18:52:42.202905   24633 kubeadm.go:582] duration metric: took 20.965817345s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:52:42.202932   24633 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:52:42.377174   24633 request.go:632] Waited for 174.162112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes
	I0906 18:52:42.377231   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes
	I0906 18:52:42.377238   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:42.377249   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:42.377257   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:42.381619   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:42.382336   24633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:52:42.382360   24633 node_conditions.go:123] node cpu capacity is 2
	I0906 18:52:42.382386   24633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:52:42.382390   24633 node_conditions.go:123] node cpu capacity is 2
	I0906 18:52:42.382394   24633 node_conditions.go:105] duration metric: took 179.458216ms to run NodePressure ...
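
The NodePressure step lists the cluster's nodes and reads the capacity each one reports, which is where the ephemeral-storage and CPU figures above come from. A minimal client-go listing of the same values, under the same assumption of a locally available kubeconfig:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
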
	I0906 18:52:42.382408   24633 start.go:241] waiting for startup goroutines ...
	I0906 18:52:42.382439   24633 start.go:255] writing updated cluster config ...
	I0906 18:52:42.384374   24633 out.go:201] 
	I0906 18:52:42.385896   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:52:42.385977   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:52:42.387373   24633 out.go:177] * Starting "ha-313128-m03" control-plane node in "ha-313128" cluster
	I0906 18:52:42.388310   24633 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:52:42.388331   24633 cache.go:56] Caching tarball of preloaded images
	I0906 18:52:42.388442   24633 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 18:52:42.388454   24633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 18:52:42.388533   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:52:42.388784   24633 start.go:360] acquireMachinesLock for ha-313128-m03: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:52:42.388822   24633 start.go:364] duration metric: took 22.001µs to acquireMachinesLock for "ha-313128-m03"
	I0906 18:52:42.388840   24633 start.go:93] Provisioning new machine with config: &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:52:42.388949   24633 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0906 18:52:42.390247   24633 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 18:52:42.390362   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:52:42.390394   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:52:42.405591   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0906 18:52:42.406111   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:52:42.406615   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:52:42.406634   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:52:42.406956   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:52:42.407134   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetMachineName
	I0906 18:52:42.407289   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:52:42.407430   24633 start.go:159] libmachine.API.Create for "ha-313128" (driver="kvm2")
	I0906 18:52:42.407466   24633 client.go:168] LocalClient.Create starting
	I0906 18:52:42.407501   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 18:52:42.407543   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:52:42.407566   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:52:42.407635   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 18:52:42.407671   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:52:42.407686   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:52:42.407708   24633 main.go:141] libmachine: Running pre-create checks...
	I0906 18:52:42.407719   24633 main.go:141] libmachine: (ha-313128-m03) Calling .PreCreateCheck
	I0906 18:52:42.407960   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetConfigRaw
	I0906 18:52:42.408419   24633 main.go:141] libmachine: Creating machine...
	I0906 18:52:42.408431   24633 main.go:141] libmachine: (ha-313128-m03) Calling .Create
	I0906 18:52:42.408578   24633 main.go:141] libmachine: (ha-313128-m03) Creating KVM machine...
	I0906 18:52:42.409894   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found existing default KVM network
	I0906 18:52:42.410024   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found existing private KVM network mk-ha-313128
	I0906 18:52:42.410166   24633 main.go:141] libmachine: (ha-313128-m03) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03 ...
	I0906 18:52:42.410183   24633 main.go:141] libmachine: (ha-313128-m03) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:52:42.410295   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:42.410178   25383 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:52:42.410429   24633 main.go:141] libmachine: (ha-313128-m03) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 18:52:42.672936   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:42.672778   25383 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa...
	I0906 18:52:42.960450   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:42.960318   25383 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/ha-313128-m03.rawdisk...
	I0906 18:52:42.960474   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Writing magic tar header
	I0906 18:52:42.960485   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Writing SSH key tar header
	I0906 18:52:42.960498   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:42.960465   25383 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03 ...
	I0906 18:52:42.960595   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03
	I0906 18:52:42.960628   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 18:52:42.960638   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:52:42.960646   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03 (perms=drwx------)
	I0906 18:52:42.960653   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 18:52:42.960681   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 18:52:42.960704   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 18:52:42.960716   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 18:52:42.960729   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins
	I0906 18:52:42.960744   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 18:52:42.960757   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 18:52:42.960767   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home
	I0906 18:52:42.960787   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Skipping /home - not owner
	I0906 18:52:42.960805   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 18:52:42.960816   24633 main.go:141] libmachine: (ha-313128-m03) Creating domain...
	I0906 18:52:42.961768   24633 main.go:141] libmachine: (ha-313128-m03) define libvirt domain using xml: 
	I0906 18:52:42.961791   24633 main.go:141] libmachine: (ha-313128-m03) <domain type='kvm'>
	I0906 18:52:42.961802   24633 main.go:141] libmachine: (ha-313128-m03)   <name>ha-313128-m03</name>
	I0906 18:52:42.961814   24633 main.go:141] libmachine: (ha-313128-m03)   <memory unit='MiB'>2200</memory>
	I0906 18:52:42.961823   24633 main.go:141] libmachine: (ha-313128-m03)   <vcpu>2</vcpu>
	I0906 18:52:42.961836   24633 main.go:141] libmachine: (ha-313128-m03)   <features>
	I0906 18:52:42.961849   24633 main.go:141] libmachine: (ha-313128-m03)     <acpi/>
	I0906 18:52:42.961859   24633 main.go:141] libmachine: (ha-313128-m03)     <apic/>
	I0906 18:52:42.961867   24633 main.go:141] libmachine: (ha-313128-m03)     <pae/>
	I0906 18:52:42.961877   24633 main.go:141] libmachine: (ha-313128-m03)     
	I0906 18:52:42.961891   24633 main.go:141] libmachine: (ha-313128-m03)   </features>
	I0906 18:52:42.961903   24633 main.go:141] libmachine: (ha-313128-m03)   <cpu mode='host-passthrough'>
	I0906 18:52:42.961913   24633 main.go:141] libmachine: (ha-313128-m03)   
	I0906 18:52:42.961920   24633 main.go:141] libmachine: (ha-313128-m03)   </cpu>
	I0906 18:52:42.961932   24633 main.go:141] libmachine: (ha-313128-m03)   <os>
	I0906 18:52:42.961940   24633 main.go:141] libmachine: (ha-313128-m03)     <type>hvm</type>
	I0906 18:52:42.961952   24633 main.go:141] libmachine: (ha-313128-m03)     <boot dev='cdrom'/>
	I0906 18:52:42.961961   24633 main.go:141] libmachine: (ha-313128-m03)     <boot dev='hd'/>
	I0906 18:52:42.961973   24633 main.go:141] libmachine: (ha-313128-m03)     <bootmenu enable='no'/>
	I0906 18:52:42.961982   24633 main.go:141] libmachine: (ha-313128-m03)   </os>
	I0906 18:52:42.961993   24633 main.go:141] libmachine: (ha-313128-m03)   <devices>
	I0906 18:52:42.962000   24633 main.go:141] libmachine: (ha-313128-m03)     <disk type='file' device='cdrom'>
	I0906 18:52:42.962016   24633 main.go:141] libmachine: (ha-313128-m03)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/boot2docker.iso'/>
	I0906 18:52:42.962028   24633 main.go:141] libmachine: (ha-313128-m03)       <target dev='hdc' bus='scsi'/>
	I0906 18:52:42.962037   24633 main.go:141] libmachine: (ha-313128-m03)       <readonly/>
	I0906 18:52:42.962047   24633 main.go:141] libmachine: (ha-313128-m03)     </disk>
	I0906 18:52:42.962059   24633 main.go:141] libmachine: (ha-313128-m03)     <disk type='file' device='disk'>
	I0906 18:52:42.962071   24633 main.go:141] libmachine: (ha-313128-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 18:52:42.962082   24633 main.go:141] libmachine: (ha-313128-m03)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/ha-313128-m03.rawdisk'/>
	I0906 18:52:42.962094   24633 main.go:141] libmachine: (ha-313128-m03)       <target dev='hda' bus='virtio'/>
	I0906 18:52:42.962105   24633 main.go:141] libmachine: (ha-313128-m03)     </disk>
	I0906 18:52:42.962116   24633 main.go:141] libmachine: (ha-313128-m03)     <interface type='network'>
	I0906 18:52:42.962132   24633 main.go:141] libmachine: (ha-313128-m03)       <source network='mk-ha-313128'/>
	I0906 18:52:42.962142   24633 main.go:141] libmachine: (ha-313128-m03)       <model type='virtio'/>
	I0906 18:52:42.962153   24633 main.go:141] libmachine: (ha-313128-m03)     </interface>
	I0906 18:52:42.962162   24633 main.go:141] libmachine: (ha-313128-m03)     <interface type='network'>
	I0906 18:52:42.962170   24633 main.go:141] libmachine: (ha-313128-m03)       <source network='default'/>
	I0906 18:52:42.962179   24633 main.go:141] libmachine: (ha-313128-m03)       <model type='virtio'/>
	I0906 18:52:42.962207   24633 main.go:141] libmachine: (ha-313128-m03)     </interface>
	I0906 18:52:42.962228   24633 main.go:141] libmachine: (ha-313128-m03)     <serial type='pty'>
	I0906 18:52:42.962241   24633 main.go:141] libmachine: (ha-313128-m03)       <target port='0'/>
	I0906 18:52:42.962254   24633 main.go:141] libmachine: (ha-313128-m03)     </serial>
	I0906 18:52:42.962284   24633 main.go:141] libmachine: (ha-313128-m03)     <console type='pty'>
	I0906 18:52:42.962312   24633 main.go:141] libmachine: (ha-313128-m03)       <target type='serial' port='0'/>
	I0906 18:52:42.962329   24633 main.go:141] libmachine: (ha-313128-m03)     </console>
	I0906 18:52:42.962340   24633 main.go:141] libmachine: (ha-313128-m03)     <rng model='virtio'>
	I0906 18:52:42.962351   24633 main.go:141] libmachine: (ha-313128-m03)       <backend model='random'>/dev/random</backend>
	I0906 18:52:42.962361   24633 main.go:141] libmachine: (ha-313128-m03)     </rng>
	I0906 18:52:42.962369   24633 main.go:141] libmachine: (ha-313128-m03)     
	I0906 18:52:42.962378   24633 main.go:141] libmachine: (ha-313128-m03)     
	I0906 18:52:42.962386   24633 main.go:141] libmachine: (ha-313128-m03)   </devices>
	I0906 18:52:42.962395   24633 main.go:141] libmachine: (ha-313128-m03) </domain>
	I0906 18:52:42.962405   24633 main.go:141] libmachine: (ha-313128-m03) 
	I0906 18:52:42.968960   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:6e:42:bb in network default
	I0906 18:52:42.969654   24633 main.go:141] libmachine: (ha-313128-m03) Ensuring networks are active...
	I0906 18:52:42.969681   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:42.970455   24633 main.go:141] libmachine: (ha-313128-m03) Ensuring network default is active
	I0906 18:52:42.970789   24633 main.go:141] libmachine: (ha-313128-m03) Ensuring network mk-ha-313128 is active
	I0906 18:52:42.971179   24633 main.go:141] libmachine: (ha-313128-m03) Getting domain xml...
	I0906 18:52:42.971917   24633 main.go:141] libmachine: (ha-313128-m03) Creating domain...
	I0906 18:52:44.206269   24633 main.go:141] libmachine: (ha-313128-m03) Waiting to get IP...
	I0906 18:52:44.207290   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:44.207825   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:44.207851   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:44.207812   25383 retry.go:31] will retry after 269.325849ms: waiting for machine to come up
	I0906 18:52:44.479059   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:44.479551   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:44.479580   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:44.479501   25383 retry.go:31] will retry after 259.571768ms: waiting for machine to come up
	I0906 18:52:44.741020   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:44.741529   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:44.741561   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:44.741486   25383 retry.go:31] will retry after 344.482395ms: waiting for machine to come up
	I0906 18:52:45.087978   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:45.088479   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:45.088508   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:45.088430   25383 retry.go:31] will retry after 469.573996ms: waiting for machine to come up
	I0906 18:52:45.559051   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:45.559525   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:45.559558   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:45.559474   25383 retry.go:31] will retry after 549.907681ms: waiting for machine to come up
	I0906 18:52:46.111222   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:46.111794   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:46.111824   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:46.111739   25383 retry.go:31] will retry after 897.894422ms: waiting for machine to come up
	I0906 18:52:47.011456   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:47.012300   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:47.012332   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:47.012240   25383 retry.go:31] will retry after 1.023510644s: waiting for machine to come up
	I0906 18:52:48.037255   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:48.037760   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:48.037788   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:48.037710   25383 retry.go:31] will retry after 1.096197794s: waiting for machine to come up
	I0906 18:52:49.135190   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:49.135772   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:49.135799   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:49.135721   25383 retry.go:31] will retry after 1.322554958s: waiting for machine to come up
	I0906 18:52:50.459897   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:50.460204   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:50.460224   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:50.460165   25383 retry.go:31] will retry after 1.619516894s: waiting for machine to come up
	I0906 18:52:52.081273   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:52.081758   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:52.081788   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:52.081702   25383 retry.go:31] will retry after 1.955341722s: waiting for machine to come up
	I0906 18:52:54.038968   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:54.039367   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:54.039421   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:54.039323   25383 retry.go:31] will retry after 2.472747912s: waiting for machine to come up
	I0906 18:52:56.513791   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:56.514187   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:56.514211   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:56.514144   25383 retry.go:31] will retry after 3.605132636s: waiting for machine to come up
	I0906 18:53:00.121842   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:00.122311   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:53:00.122332   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:53:00.122283   25383 retry.go:31] will retry after 5.401636488s: waiting for machine to come up
	I0906 18:53:05.527338   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.527877   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has current primary IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.527899   24633 main.go:141] libmachine: (ha-313128-m03) Found IP for machine: 192.168.39.172
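
The long run of "will retry after …" lines is the driver re-querying the network's DHCP leases with growing, jittered delays until the freshly defined domain picks up an address. A generic sketch of that wait loop; lookupIP here is a hypothetical stand-in for the lease query, not a libvirt API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the DHCP leases for the
// domain's MAC address; it fails until the guest has booted and requested a lease.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Grow the delay with some jitter, roughly matching the intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	ip, err := waitForIP("52:54:00:90:b3:07", 2*time.Minute)
	fmt.Println(ip, err)
}

Capping the delay keeps the poll responsive once the guest has actually booted, while the jitter avoids hammering the lease table at a fixed cadence.
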
	I0906 18:53:05.527911   24633 main.go:141] libmachine: (ha-313128-m03) Reserving static IP address...
	I0906 18:53:05.528327   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find host DHCP lease matching {name: "ha-313128-m03", mac: "52:54:00:90:b3:07", ip: "192.168.39.172"} in network mk-ha-313128
	I0906 18:53:05.601029   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Getting to WaitForSSH function...
	I0906 18:53:05.601061   24633 main.go:141] libmachine: (ha-313128-m03) Reserved static IP address: 192.168.39.172
	I0906 18:53:05.601079   24633 main.go:141] libmachine: (ha-313128-m03) Waiting for SSH to be available...
	I0906 18:53:05.603690   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.604143   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:05.604168   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.604367   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Using SSH client type: external
	I0906 18:53:05.604394   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa (-rw-------)
	I0906 18:53:05.604423   24633 main.go:141] libmachine: (ha-313128-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:53:05.604439   24633 main.go:141] libmachine: (ha-313128-m03) DBG | About to run SSH command:
	I0906 18:53:05.604451   24633 main.go:141] libmachine: (ha-313128-m03) DBG | exit 0
	I0906 18:53:05.729014   24633 main.go:141] libmachine: (ha-313128-m03) DBG | SSH cmd err, output: <nil>: 
	I0906 18:53:05.729277   24633 main.go:141] libmachine: (ha-313128-m03) KVM machine creation complete!
	I0906 18:53:05.729579   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetConfigRaw
	I0906 18:53:05.730093   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:05.730321   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:05.730492   24633 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 18:53:05.730505   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:53:05.731649   24633 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 18:53:05.731662   24633 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 18:53:05.731673   24633 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 18:53:05.731679   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:05.733873   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.734243   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:05.734274   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.734383   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:05.734581   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.734727   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.734833   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:05.734991   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:05.735215   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:05.735239   24633 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 18:53:05.840443   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:53:05.840475   24633 main.go:141] libmachine: Detecting the provisioner...
	I0906 18:53:05.840485   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:05.843177   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.843554   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:05.843583   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.843765   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:05.843954   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.844086   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.844184   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:05.844380   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:05.844548   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:05.844558   24633 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 18:53:05.949677   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 18:53:05.949774   24633 main.go:141] libmachine: found compatible host: buildroot
	I0906 18:53:05.949784   24633 main.go:141] libmachine: Provisioning with buildroot...
	I0906 18:53:05.949793   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetMachineName
	I0906 18:53:05.950038   24633 buildroot.go:166] provisioning hostname "ha-313128-m03"
	I0906 18:53:05.950059   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetMachineName
	I0906 18:53:05.950201   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:05.952795   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.953180   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:05.953212   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.953325   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:05.953498   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.953649   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.953814   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:05.953954   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:05.954108   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:05.954118   24633 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-313128-m03 && echo "ha-313128-m03" | sudo tee /etc/hostname
	I0906 18:53:06.072413   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128-m03
	
	I0906 18:53:06.072439   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.075110   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.075526   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.075554   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.075831   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:06.076026   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.076220   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.076328   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:06.076519   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:06.076679   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:06.076697   24633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-313128-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-313128-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-313128-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:53:06.191781   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:53:06.191813   24633 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 18:53:06.191834   24633 buildroot.go:174] setting up certificates
	I0906 18:53:06.191848   24633 provision.go:84] configureAuth start
	I0906 18:53:06.191861   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetMachineName
	I0906 18:53:06.192106   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:53:06.194630   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.194897   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.194923   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.195124   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.197545   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.197899   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.197925   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.198058   24633 provision.go:143] copyHostCerts
	I0906 18:53:06.198091   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:53:06.198130   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 18:53:06.198142   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:53:06.198219   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 18:53:06.198312   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:53:06.198336   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 18:53:06.198344   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:53:06.198383   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 18:53:06.198448   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:53:06.198471   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 18:53:06.198479   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:53:06.198517   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 18:53:06.198594   24633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.ha-313128-m03 san=[127.0.0.1 192.168.39.172 ha-313128-m03 localhost minikube]
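
The server certificate above is issued for a SAN set covering the node IP, its hostname, and the loopback/localhost names, so the machine can be reached under any of them. A compact standard-library sketch of generating a certificate with those SANs; it is self-signed for brevity, whereas the log shows signing against ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-313128-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line: IPs and DNS names the server may be addressed by.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.172")},
		DNSNames:    []string{"ha-313128-m03", "localhost", "minikube"},
	}

	// Self-signed for the sketch; a real setup would sign with the CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
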
	I0906 18:53:06.364914   24633 provision.go:177] copyRemoteCerts
	I0906 18:53:06.364978   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:53:06.365007   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.367341   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.367666   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.367692   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.367850   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:06.368022   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.368164   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:06.368284   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:53:06.451510   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 18:53:06.451589   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:53:06.478096   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 18:53:06.478160   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 18:53:06.503688   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 18:53:06.503768   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 18:53:06.528822   24633 provision.go:87] duration metric: took 336.96118ms to configureAuth
	I0906 18:53:06.528850   24633 buildroot.go:189] setting minikube options for container-runtime
	I0906 18:53:06.529126   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:53:06.529201   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.532385   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.532849   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.532900   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.533143   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:06.533361   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.533530   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.533673   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:06.533855   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:06.534077   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:06.534093   24633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 18:53:06.756664   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 18:53:06.756686   24633 main.go:141] libmachine: Checking connection to Docker...
	I0906 18:53:06.756694   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetURL
	I0906 18:53:06.757884   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Using libvirt version 6000000
	I0906 18:53:06.760136   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.760546   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.760584   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.760740   24633 main.go:141] libmachine: Docker is up and running!
	I0906 18:53:06.760758   24633 main.go:141] libmachine: Reticulating splines...
	I0906 18:53:06.760765   24633 client.go:171] duration metric: took 24.353288857s to LocalClient.Create
	I0906 18:53:06.760784   24633 start.go:167] duration metric: took 24.353355904s to libmachine.API.Create "ha-313128"
	I0906 18:53:06.760793   24633 start.go:293] postStartSetup for "ha-313128-m03" (driver="kvm2")
	I0906 18:53:06.760803   24633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:53:06.760819   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:06.761062   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:53:06.761085   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.763644   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.763985   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.764012   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.764192   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:06.764397   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.764578   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:06.764735   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
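
The "new ssh client" entries above show sshutil dialing the freshly created node as the docker user with the machine's generated id_rsa key. As a rough Go sketch of what such a connection looks like with golang.org/x/crypto/ssh -- the host address, key path and the InsecureIgnoreHostKey choice below are illustrative assumptions, not minikube's exact implementation:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical stand-ins for the IP/SSHKeyPath/Username shown in the log.
	key, err := os.ReadFile("/path/to/machines/ha-313128-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rigs typically skip host key checks
	}
	client, err := ssh.Dial("tcp", "192.168.39.172:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
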
	I0906 18:53:06.847844   24633 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:53:06.852718   24633 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 18:53:06.852747   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 18:53:06.852822   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 18:53:06.852936   24633 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 18:53:06.852952   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 18:53:06.853048   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 18:53:06.863393   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:53:06.888736   24633 start.go:296] duration metric: took 127.929369ms for postStartSetup
	I0906 18:53:06.888797   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetConfigRaw
	I0906 18:53:06.889451   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:53:06.892071   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.892487   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.892514   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.892825   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:53:06.893247   24633 start.go:128] duration metric: took 24.504277174s to createHost
	I0906 18:53:06.893274   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.895395   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.895728   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.895757   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.895895   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:06.896083   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.896245   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.896375   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:06.896551   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:06.896748   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:06.896761   24633 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 18:53:07.001946   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725648786.975718563
	
	I0906 18:53:07.001969   24633 fix.go:216] guest clock: 1725648786.975718563
	I0906 18:53:07.001979   24633 fix.go:229] Guest: 2024-09-06 18:53:06.975718563 +0000 UTC Remote: 2024-09-06 18:53:06.893261539 +0000 UTC m=+144.685491150 (delta=82.457024ms)
	I0906 18:53:07.002009   24633 fix.go:200] guest clock delta is within tolerance: 82.457024ms
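
The guest clock check above runs date +%s.%N on the new VM and compares the result with the host-side timestamp; provisioning proceeds because the 82ms delta is inside tolerance. A minimal Go sketch of that comparison using the two timestamps from the log; the one-second tolerance below is an assumed value for illustration, not the constant fix.go actually uses:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1725648786.975718563") into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest clock as reported by `date +%s.%N` in the log above.
	guest, err := parseGuestClock("1725648786.975718563")
	if err != nil {
		panic(err)
	}
	// Host-side reference time from the same log entry (normally this is time.Now()).
	host := time.Date(2024, 9, 6, 18, 53, 6, 893261539, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold for illustration
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
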
	I0906 18:53:07.002019   24633 start.go:83] releasing machines lock for "ha-313128-m03", held for 24.613186073s
	I0906 18:53:07.002047   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:07.002365   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:53:07.005201   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.005588   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:07.005613   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.008039   24633 out.go:177] * Found network options:
	I0906 18:53:07.009756   24633 out.go:177]   - NO_PROXY=192.168.39.70,192.168.39.32
	W0906 18:53:07.011035   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 18:53:07.011064   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 18:53:07.011082   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:07.011707   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:07.011907   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:07.012004   24633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:53:07.012042   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	W0906 18:53:07.012101   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 18:53:07.012135   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 18:53:07.012207   24633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 18:53:07.012227   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:07.014979   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.015007   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.015430   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:07.015460   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.015493   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:07.015509   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.015580   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:07.015776   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:07.015786   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:07.015963   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:07.015965   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:07.016126   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:07.016150   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:53:07.016272   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:53:07.248808   24633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 18:53:07.255444   24633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:53:07.255518   24633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:53:07.272358   24633 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 18:53:07.272381   24633 start.go:495] detecting cgroup driver to use...
	I0906 18:53:07.272447   24633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 18:53:07.290268   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:53:07.305250   24633 docker.go:217] disabling cri-docker service (if available) ...
	I0906 18:53:07.305302   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 18:53:07.320102   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 18:53:07.334587   24633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 18:53:07.451557   24633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 18:53:07.626596   24633 docker.go:233] disabling docker service ...
	I0906 18:53:07.626675   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 18:53:07.641115   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 18:53:07.654454   24633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 18:53:07.779657   24633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 18:53:07.902355   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 18:53:07.917720   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:53:07.938374   24633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 18:53:07.938439   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:07.952230   24633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 18:53:07.952305   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:07.963927   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:07.974677   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:07.985298   24633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:53:07.996651   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:08.008163   24633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:08.026528   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:08.038498   24633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:53:08.048748   24633 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 18:53:08.048803   24633 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 18:53:08.063095   24633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:53:08.073574   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:53:08.193677   24633 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 18:53:08.285533   24633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 18:53:08.285606   24633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 18:53:08.290429   24633 start.go:563] Will wait 60s for crictl version
	I0906 18:53:08.290477   24633 ssh_runner.go:195] Run: which crictl
	I0906 18:53:08.294356   24633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:53:08.336784   24633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 18:53:08.336886   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:53:08.367015   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:53:08.398051   24633 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 18:53:08.399358   24633 out.go:177]   - env NO_PROXY=192.168.39.70
	I0906 18:53:08.400519   24633 out.go:177]   - env NO_PROXY=192.168.39.70,192.168.39.32
	I0906 18:53:08.401625   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:53:08.404166   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:08.404535   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:08.404568   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:08.404796   24633 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 18:53:08.409362   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
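
The bash one-liner above strips any stale host.minikube.internal mapping from /etc/hosts and appends the gateway address 192.168.39.1. The same edit written directly in Go, assuming root privileges and the IP from the log (a sketch, not minikube's code):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing host.minikube.internal mapping, mirroring the grep -v above.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing empty lines before appending, then restore the final newline.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
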
	I0906 18:53:08.422176   24633 mustload.go:65] Loading cluster: ha-313128
	I0906 18:53:08.422434   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:53:08.422904   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:53:08.422950   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:08.438041   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43061
	I0906 18:53:08.438487   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:08.438895   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:53:08.438918   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:08.439253   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:08.439447   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:53:08.441079   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:53:08.441376   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:53:08.441417   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:08.456403   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0906 18:53:08.456802   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:08.457251   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:53:08.457276   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:08.457570   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:08.457784   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:53:08.457940   24633 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128 for IP: 192.168.39.172
	I0906 18:53:08.457952   24633 certs.go:194] generating shared ca certs ...
	I0906 18:53:08.457970   24633 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:53:08.458109   24633 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 18:53:08.458167   24633 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 18:53:08.458178   24633 certs.go:256] generating profile certs ...
	I0906 18:53:08.458252   24633 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key
	I0906 18:53:08.458277   24633 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.694b0ac9
	I0906 18:53:08.458291   24633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.694b0ac9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.70 192.168.39.32 192.168.39.172 192.168.39.254]
	I0906 18:53:08.593889   24633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.694b0ac9 ...
	I0906 18:53:08.593920   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.694b0ac9: {Name:mk6c999646e794fc171d59c7a727ee1ebb048cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:53:08.594082   24633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.694b0ac9 ...
	I0906 18:53:08.594098   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.694b0ac9: {Name:mkf8af5f6f963663c0d89938e375b153be71e632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:53:08.594168   24633 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.694b0ac9 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt
	I0906 18:53:08.594366   24633 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.694b0ac9 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key
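
The apiserver certificate generated above carries every address a client may use to reach the control plane as IP SANs: the service ClusterIP 10.96.0.1, loopback, the three control-plane node IPs, and the kube-vip VIP 192.168.39.254. A condensed Go sketch of issuing such a certificate; the throwaway CA, key size and validity below are illustrative assumptions (the real flow reuses the existing minikubeCA key pair):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP SANs listed in the log.
	sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.70", "192.168.39.32", "192.168.39.172", "192.168.39.254"}
	var ips []net.IP
	for _, s := range sans {
		ips = append(ips, net.ParseIP(s))
	}
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  ips,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
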
	I0906 18:53:08.594542   24633 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key
	I0906 18:53:08.594560   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 18:53:08.594573   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 18:53:08.594583   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 18:53:08.594594   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 18:53:08.594604   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 18:53:08.594618   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 18:53:08.594630   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 18:53:08.594642   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 18:53:08.594701   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 18:53:08.594728   24633 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 18:53:08.594738   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 18:53:08.594761   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:53:08.594782   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:53:08.594803   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 18:53:08.594843   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:53:08.594870   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 18:53:08.594884   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:53:08.594897   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 18:53:08.594924   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:53:08.597892   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:53:08.598284   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:53:08.598315   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:53:08.598485   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:53:08.598669   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:53:08.598826   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:53:08.598966   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:53:08.677160   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 18:53:08.685381   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 18:53:08.698851   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 18:53:08.703117   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 18:53:08.714724   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 18:53:08.718905   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 18:53:08.730196   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 18:53:08.735506   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0906 18:53:08.747184   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 18:53:08.751582   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 18:53:08.766710   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 18:53:08.771975   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0906 18:53:08.784212   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:53:08.810871   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 18:53:08.835164   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:53:08.861587   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:53:08.890093   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0906 18:53:08.914755   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 18:53:08.940093   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:53:08.965346   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 18:53:08.990696   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 18:53:09.014557   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:53:09.038432   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 18:53:09.067245   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 18:53:09.085969   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 18:53:09.103587   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 18:53:09.120199   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0906 18:53:09.136565   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 18:53:09.152936   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0906 18:53:09.169676   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 18:53:09.187770   24633 ssh_runner.go:195] Run: openssl version
	I0906 18:53:09.194813   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 18:53:09.206893   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 18:53:09.211625   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 18:53:09.211675   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 18:53:09.217877   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 18:53:09.230586   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 18:53:09.242731   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 18:53:09.248136   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 18:53:09.248196   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 18:53:09.253804   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 18:53:09.264699   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:53:09.276149   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:53:09.280764   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:53:09.280826   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:53:09.287180   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
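
Each trusted certificate is linked under its OpenSSL subject hash (for example minikubeCA.pem becomes /etc/ssl/certs/b5213941.0) so the system trust store can locate it by issuer. A small Go sketch that derives the hash the same way the log does, by shelling out to openssl x509 -hash -noout; the certificate path matches the log and the rest is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log; adjust as needed

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: remove any existing link, then recreate it.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Printf("%s -> %s\n", link, certPath)
}
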
	I0906 18:53:09.298443   24633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:53:09.302501   24633 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:53:09.302555   24633 kubeadm.go:934] updating node {m03 192.168.39.172 8443 v1.31.0 crio true true} ...
	I0906 18:53:09.302674   24633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-313128-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 18:53:09.302708   24633 kube-vip.go:115] generating kube-vip config ...
	I0906 18:53:09.302752   24633 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 18:53:09.320671   24633 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 18:53:09.320729   24633 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 18:53:09.320806   24633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:53:09.330370   24633 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0906 18:53:09.330416   24633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0906 18:53:09.341121   24633 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0906 18:53:09.341155   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0906 18:53:09.341157   24633 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0906 18:53:09.341176   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0906 18:53:09.341125   24633 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0906 18:53:09.341248   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0906 18:53:09.341258   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0906 18:53:09.341248   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:53:09.351667   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0906 18:53:09.351709   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0906 18:53:09.351753   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0906 18:53:09.351790   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0906 18:53:09.369263   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0906 18:53:09.369381   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0906 18:53:09.466522   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0906 18:53:09.466572   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
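
Each Kubernetes binary is fetched from dl.k8s.io together with its .sha256 file and verified before being copied into /var/lib/minikube/binaries. A stripped-down Go sketch of that download-and-verify step; the kubectl URL matches the log, while the /tmp output path is only for illustration:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	want := strings.Fields(strings.TrimSpace(string(sumFile)))[0]
	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	// Write only after the checksum matches; 0755 so the binary is executable.
	if err := os.WriteFile("/tmp/kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified and wrote /tmp/kubectl")
}
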
	I0906 18:53:10.264236   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 18:53:10.274564   24633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0906 18:53:10.292286   24633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:53:10.310162   24633 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0906 18:53:10.326710   24633 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0906 18:53:10.331644   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:53:10.344416   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:53:10.466981   24633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:53:10.485074   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:53:10.485589   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:53:10.485644   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:10.502221   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0906 18:53:10.502686   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:10.503245   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:53:10.503273   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:10.503719   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:10.503926   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:53:10.504110   24633 start.go:317] joinCluster: &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:53:10.504240   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0906 18:53:10.504262   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:53:10.507441   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:53:10.507895   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:53:10.507926   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:53:10.508063   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:53:10.508262   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:53:10.508390   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:53:10.508527   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:53:10.660452   24633 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:53:10.660499   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yxaleg.cfeauffnnk9lcyg0 --discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-313128-m03 --control-plane --apiserver-advertise-address=192.168.39.172 --apiserver-bind-port=8443"
	I0906 18:53:41.526231   24633 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yxaleg.cfeauffnnk9lcyg0 --discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-313128-m03 --control-plane --apiserver-advertise-address=192.168.39.172 --apiserver-bind-port=8443": (30.86570375s)
	I0906 18:53:41.526267   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0906 18:53:42.178453   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-313128-m03 minikube.k8s.io/updated_at=2024_09_06T18_53_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=ha-313128 minikube.k8s.io/primary=false
	I0906 18:53:42.313143   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-313128-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0906 18:53:42.438891   24633 start.go:319] duration metric: took 31.934778083s to joinCluster
	I0906 18:53:42.438982   24633 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:53:42.439381   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:53:42.440154   24633 out.go:177] * Verifying Kubernetes components...
	I0906 18:53:42.441171   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:53:42.775930   24633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:53:42.811766   24633 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:53:42.812301   24633 kapi.go:59] client config for ha-313128: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt", KeyFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key", CAFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 18:53:42.812480   24633 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.70:8443
	I0906 18:53:42.812776   24633 node_ready.go:35] waiting up to 6m0s for node "ha-313128-m03" to be "Ready" ...
	I0906 18:53:42.812881   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:42.812892   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:42.812903   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:42.812912   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:42.816347   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:43.313894   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:43.313920   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:43.313931   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:43.313940   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:43.317699   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:43.813704   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:43.813726   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:43.813734   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:43.813738   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:43.817359   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:44.313031   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:44.313052   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:44.313060   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:44.313064   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:44.316285   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:44.813055   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:44.813080   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:44.813089   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:44.813094   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:44.816444   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:44.817198   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
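
The repeated GETs against /api/v1/nodes/ha-313128-m03 are the node_ready wait loop polling until the node reports a Ready condition; it stays "False" here until the kubelet and CNI on the new node settle. An equivalent poll using client-go, assuming the kubeconfig path shown earlier in the log (a sketch, not the exact node_ready.go logic):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19576-6021/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-313128-m03", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
	}
	panic("timed out waiting for node to become Ready")
}
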
	I0906 18:53:45.312995   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:45.313037   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:45.313047   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:45.313052   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:45.316807   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:45.813870   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:45.813898   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:45.813909   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:45.813914   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:45.817869   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:46.313985   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:46.314011   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:46.314024   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:46.314032   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:46.317099   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:46.813053   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:46.813079   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:46.813092   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:46.813099   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:46.822959   24633 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0906 18:53:46.823418   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:47.313752   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:47.313772   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:47.313780   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:47.313784   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:47.316959   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:47.813930   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:47.813953   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:47.813965   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:47.813972   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:47.817642   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:48.313980   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:48.314004   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:48.314012   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:48.314015   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:48.317443   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:48.812994   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:48.813026   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:48.813035   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:48.813039   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:48.816141   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:49.313677   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:49.313701   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:49.313711   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:49.313717   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:49.318967   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:53:49.319464   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:49.813866   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:49.813889   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:49.813897   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:49.813901   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:49.816921   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:50.313853   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:50.313875   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:50.313882   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:50.313887   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:50.317260   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:50.813959   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:50.813998   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:50.814007   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:50.814011   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:50.817199   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:51.313011   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:51.313039   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:51.313047   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:51.313052   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:51.316841   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:51.814002   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:51.814028   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:51.814038   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:51.814044   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:51.817528   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:51.818454   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:52.313022   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:52.313046   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:52.313058   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:52.313064   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:52.316557   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:52.813552   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:52.813578   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:52.813590   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:52.813596   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:52.816773   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:53.313033   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:53.313056   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:53.313064   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:53.313067   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:53.316654   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:53.813671   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:53.813691   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:53.813699   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:53.813703   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:53.816712   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:54.313933   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:54.313956   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:54.313964   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:54.313968   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:54.317619   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:54.318574   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:54.813972   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:54.813994   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:54.814002   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:54.814012   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:54.817704   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:55.313028   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:55.313051   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:55.313059   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:55.313065   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:55.316670   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:55.813769   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:55.813792   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:55.813800   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:55.813804   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:55.817218   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:56.313025   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:56.313054   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:56.313064   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:56.313068   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:56.316489   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:56.813331   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:56.813353   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:56.813363   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:56.813368   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:56.816700   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:56.817403   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:57.313949   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:57.313973   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.313983   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.313989   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.327439   24633 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0906 18:53:57.328358   24633 node_ready.go:49] node "ha-313128-m03" has status "Ready":"True"
	I0906 18:53:57.328378   24633 node_ready.go:38] duration metric: took 14.515582635s for node "ha-313128-m03" to be "Ready" ...
	I0906 18:53:57.328389   24633 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:53:57.328477   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:53:57.328488   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.328498   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.328503   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.335604   24633 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 18:53:57.342737   24633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.342809   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-gccvh
	I0906 18:53:57.342815   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.342825   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.342831   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.345862   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:57.346611   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:53:57.346627   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.346634   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.346639   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.349258   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.349714   24633 pod_ready.go:93] pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:57.349733   24633 pod_ready.go:82] duration metric: took 6.974302ms for pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.349744   24633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.349805   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-gk28z
	I0906 18:53:57.349815   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.349825   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.349832   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.352547   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.353211   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:53:57.353233   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.353244   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.353251   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.355705   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.356421   24633 pod_ready.go:93] pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:57.356441   24633 pod_ready.go:82] duration metric: took 6.689336ms for pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.356453   24633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.356510   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128
	I0906 18:53:57.356521   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.356533   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.356542   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.359039   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.359573   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:53:57.359590   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.359599   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.359603   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.362106   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.362720   24633 pod_ready.go:93] pod "etcd-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:57.362737   24633 pod_ready.go:82] duration metric: took 6.276937ms for pod "etcd-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.362747   24633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.362796   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:53:57.362806   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.362815   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.362826   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.369660   24633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 18:53:57.370162   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:53:57.370177   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.370186   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.370191   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.372802   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.373457   24633 pod_ready.go:93] pod "etcd-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:57.373480   24633 pod_ready.go:82] duration metric: took 10.722895ms for pod "etcd-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.373492   24633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.514895   24633 request.go:632] Waited for 141.339391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m03
	I0906 18:53:57.514968   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m03
	I0906 18:53:57.514976   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.514985   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.514993   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.518559   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:57.714441   24633 request.go:632] Waited for 195.349087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:57.714504   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:57.714512   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.714522   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.714527   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.717936   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:57.914384   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m03
	I0906 18:53:57.914409   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.914419   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.914426   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.918369   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:58.114393   24633 request.go:632] Waited for 195.358749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:58.114452   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:58.114457   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:58.114464   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:58.114469   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:58.117810   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:58.374575   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m03
	I0906 18:53:58.374600   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:58.374609   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:58.374616   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:58.378690   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:53:58.514368   24633 request.go:632] Waited for 134.771045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:58.514438   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:58.514449   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:58.514459   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:58.514471   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:58.518091   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:58.518697   24633 pod_ready.go:93] pod "etcd-ha-313128-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:58.518712   24633 pod_ready.go:82] duration metric: took 1.145213644s for pod "etcd-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:58.518732   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:58.714005   24633 request.go:632] Waited for 195.202478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128
	I0906 18:53:58.714095   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128
	I0906 18:53:58.714103   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:58.714117   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:58.714129   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:58.717314   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:58.914271   24633 request.go:632] Waited for 196.153535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:53:58.914335   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:53:58.914344   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:58.914358   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:58.914366   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:58.917837   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:58.918643   24633 pod_ready.go:93] pod "kube-apiserver-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:58.918660   24633 pod_ready.go:82] duration metric: took 399.921214ms for pod "kube-apiserver-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:58.918669   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:59.114766   24633 request.go:632] Waited for 196.017542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128-m02
	I0906 18:53:59.114831   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128-m02
	I0906 18:53:59.114839   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:59.114852   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:59.114860   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:59.118605   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:59.314628   24633 request.go:632] Waited for 195.357248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:53:59.314681   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:53:59.314687   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:59.314696   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:59.314708   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:59.317819   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:59.318398   24633 pod_ready.go:93] pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:59.318414   24633 pod_ready.go:82] duration metric: took 399.739323ms for pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:59.318426   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:59.514622   24633 request.go:632] Waited for 196.133616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128-m03
	I0906 18:53:59.514701   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128-m03
	I0906 18:53:59.514707   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:59.514715   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:59.514719   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:59.518088   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:59.714940   24633 request.go:632] Waited for 196.072496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:59.714999   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:59.715005   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:59.715012   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:59.715016   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:59.717813   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:59.718565   24633 pod_ready.go:93] pod "kube-apiserver-ha-313128-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:59.718584   24633 pod_ready.go:82] duration metric: took 400.146943ms for pod "kube-apiserver-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:59.718598   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:59.914728   24633 request.go:632] Waited for 196.064081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128
	I0906 18:53:59.914800   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128
	I0906 18:53:59.914805   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:59.914813   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:59.914821   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:59.918524   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.114637   24633 request.go:632] Waited for 195.373041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:00.114703   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:00.114710   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:00.114721   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:00.114729   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:00.118047   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.118811   24633 pod_ready.go:93] pod "kube-controller-manager-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:00.118830   24633 pod_ready.go:82] duration metric: took 400.22454ms for pod "kube-controller-manager-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:00.118840   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:00.314834   24633 request.go:632] Waited for 195.917876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m02
	I0906 18:54:00.314899   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m02
	I0906 18:54:00.314906   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:00.314916   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:00.314926   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:00.318082   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.514111   24633 request.go:632] Waited for 195.120873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:00.514172   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:00.514179   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:00.514197   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:00.514205   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:00.517491   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.518099   24633 pod_ready.go:93] pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:00.518116   24633 pod_ready.go:82] duration metric: took 399.268736ms for pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:00.518126   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:00.714447   24633 request.go:632] Waited for 196.253088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m03
	I0906 18:54:00.714544   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m03
	I0906 18:54:00.714551   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:00.714565   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:00.714575   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:00.718114   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.914418   24633 request.go:632] Waited for 195.377075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:00.914483   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:00.914491   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:00.914500   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:00.914509   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:00.917901   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.918649   24633 pod_ready.go:93] pod "kube-controller-manager-ha-313128-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:00.918671   24633 pod_ready.go:82] duration metric: took 400.537166ms for pod "kube-controller-manager-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:00.918682   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gfjr7" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:01.114917   24633 request.go:632] Waited for 196.159274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gfjr7
	I0906 18:54:01.114989   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gfjr7
	I0906 18:54:01.114996   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:01.115007   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:01.115016   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:01.118521   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:01.314588   24633 request.go:632] Waited for 195.358728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:01.314668   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:01.314675   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:01.314682   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:01.314686   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:01.318029   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:01.318673   24633 pod_ready.go:93] pod "kube-proxy-gfjr7" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:01.318691   24633 pod_ready.go:82] duration metric: took 400.003139ms for pod "kube-proxy-gfjr7" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:01.318701   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h5xn7" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:01.514801   24633 request.go:632] Waited for 196.042574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5xn7
	I0906 18:54:01.514855   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5xn7
	I0906 18:54:01.514866   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:01.514885   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:01.514891   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:01.518511   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:01.714537   24633 request.go:632] Waited for 195.332709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:01.714602   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:01.714609   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:01.714620   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:01.714626   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:01.717898   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:01.718416   24633 pod_ready.go:93] pod "kube-proxy-h5xn7" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:01.718434   24633 pod_ready.go:82] duration metric: took 399.727356ms for pod "kube-proxy-h5xn7" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:01.718446   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xjp6p" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:01.914543   24633 request.go:632] Waited for 196.020945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjp6p
	I0906 18:54:01.914611   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjp6p
	I0906 18:54:01.914617   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:01.914624   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:01.914629   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:01.918372   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:02.114514   24633 request.go:632] Waited for 195.35283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:02.114587   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:02.114593   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:02.114600   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:02.114604   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:02.118050   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:02.118591   24633 pod_ready.go:93] pod "kube-proxy-xjp6p" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:02.118610   24633 pod_ready.go:82] duration metric: took 400.155611ms for pod "kube-proxy-xjp6p" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:02.118620   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:02.313968   24633 request.go:632] Waited for 195.283751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128
	I0906 18:54:02.314056   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128
	I0906 18:54:02.314065   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:02.314077   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:02.314091   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:02.317646   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:02.514144   24633 request.go:632] Waited for 195.801776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:02.514208   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:02.514214   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:02.514221   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:02.514226   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:02.517249   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:54:02.517938   24633 pod_ready.go:93] pod "kube-scheduler-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:02.517955   24633 pod_ready.go:82] duration metric: took 399.328108ms for pod "kube-scheduler-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:02.517964   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:02.714164   24633 request.go:632] Waited for 196.128114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m02
	I0906 18:54:02.714243   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m02
	I0906 18:54:02.714253   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:02.714264   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:02.714274   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:02.717794   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:02.914697   24633 request.go:632] Waited for 196.291724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:02.914751   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:02.914759   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:02.914768   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:02.914779   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:02.918615   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:02.919354   24633 pod_ready.go:93] pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:02.919370   24633 pod_ready.go:82] duration metric: took 401.399291ms for pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:02.919381   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:03.114558   24633 request.go:632] Waited for 195.096741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m03
	I0906 18:54:03.114639   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m03
	I0906 18:54:03.114653   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.114665   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.114676   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.117825   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:03.314865   24633 request.go:632] Waited for 196.35431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:03.314945   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:03.314951   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.314958   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.314962   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.318254   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:03.318931   24633 pod_ready.go:93] pod "kube-scheduler-ha-313128-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:03.318948   24633 pod_ready.go:82] duration metric: took 399.560197ms for pod "kube-scheduler-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:03.318958   24633 pod_ready.go:39] duration metric: took 5.990557854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:54:03.318972   24633 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:54:03.319025   24633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:54:03.334503   24633 api_server.go:72] duration metric: took 20.895485689s to wait for apiserver process to appear ...
	I0906 18:54:03.334523   24633 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:54:03.334540   24633 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0906 18:54:03.340935   24633 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I0906 18:54:03.341012   24633 round_trippers.go:463] GET https://192.168.39.70:8443/version
	I0906 18:54:03.341023   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.341034   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.341043   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.341830   24633 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 18:54:03.341912   24633 api_server.go:141] control plane version: v1.31.0
	I0906 18:54:03.341930   24633 api_server.go:131] duration metric: took 7.401121ms to wait for apiserver health ...
	I0906 18:54:03.341940   24633 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:54:03.514101   24633 request.go:632] Waited for 172.091152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:54:03.514158   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:54:03.514164   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.514172   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.514175   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.520237   24633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 18:54:03.526898   24633 system_pods.go:59] 24 kube-system pods found
	I0906 18:54:03.526925   24633 system_pods.go:61] "coredns-6f6b679f8f-gccvh" [9b7c0e1a-3359-4f9f-826c-b75cbdfcd500] Running
	I0906 18:54:03.526931   24633 system_pods.go:61] "coredns-6f6b679f8f-gk28z" [ab595ef6-eaa8-44a0-bdad-ddd59c8d052d] Running
	I0906 18:54:03.526935   24633 system_pods.go:61] "etcd-ha-313128" [b4550b86-3359-44e6-a495-7db003a1bb95] Running
	I0906 18:54:03.526939   24633 system_pods.go:61] "etcd-ha-313128-m02" [d42fd6b2-4ecd-49e8-b5b2-5b29fabe2d1e] Running
	I0906 18:54:03.526943   24633 system_pods.go:61] "etcd-ha-313128-m03" [389e0f5d-34fa-40ff-bba5-079485a68d04] Running
	I0906 18:54:03.526946   24633 system_pods.go:61] "kindnet-h2trt" [90af3550-1fae-46bd-9329-f185fcdb23c6] Running
	I0906 18:54:03.526949   24633 system_pods.go:61] "kindnet-jl257" [0c8c46d5-9a1f-40c6-823e-3e0afca658c5] Running
	I0906 18:54:03.526953   24633 system_pods.go:61] "kindnet-t65ls" [657498aa-b76e-4eb2-abbe-5d8a050fc415] Running
	I0906 18:54:03.526958   24633 system_pods.go:61] "kube-apiserver-ha-313128" [081ff647-e9c5-4cce-895a-e5e660db1acc] Running
	I0906 18:54:03.526960   24633 system_pods.go:61] "kube-apiserver-ha-313128-m02" [bed2808b-bef1-4fd4-a811-762a5ff46343] Running
	I0906 18:54:03.526966   24633 system_pods.go:61] "kube-apiserver-ha-313128-m03" [df855b79-c920-42c5-a8c2-d4d97c4d0fed] Running
	I0906 18:54:03.526970   24633 system_pods.go:61] "kube-controller-manager-ha-313128" [ba0308a5-06d8-468b-a3a1-e95a28a52dd7] Running
	I0906 18:54:03.526975   24633 system_pods.go:61] "kube-controller-manager-ha-313128-m02" [3f4032ce-c8b4-4a2c-9384-82d5d5ec0874] Running
	I0906 18:54:03.526979   24633 system_pods.go:61] "kube-controller-manager-ha-313128-m03" [4f975f72-075c-43dd-b104-bdf5172f45ed] Running
	I0906 18:54:03.526985   24633 system_pods.go:61] "kube-proxy-gfjr7" [2fb5a899-48c8-4e96-ac8e-b77570ecaf26] Running
	I0906 18:54:03.526989   24633 system_pods.go:61] "kube-proxy-h5xn7" [e45358c5-398e-4d33-9bd0-a4f28ce17ac9] Running
	I0906 18:54:03.526994   24633 system_pods.go:61] "kube-proxy-xjp6p" [0cbbf003-361c-441e-a2fe-18783999b020] Running
	I0906 18:54:03.526998   24633 system_pods.go:61] "kube-scheduler-ha-313128" [8580599b-125e-4a2f-9019-41b305c0f611] Running
	I0906 18:54:03.527001   24633 system_pods.go:61] "kube-scheduler-ha-313128-m02" [81cb0c5f-7e54-4e8c-b089-d6a4e2c9cbf0] Running
	I0906 18:54:03.527005   24633 system_pods.go:61] "kube-scheduler-ha-313128-m03" [a49687b2-124f-49c7-abfe-5e401ebabc1f] Running
	I0906 18:54:03.527009   24633 system_pods.go:61] "kube-vip-ha-313128" [6e270949-38fe-475f-b902-ede9d2cb795f] Running
	I0906 18:54:03.527012   24633 system_pods.go:61] "kube-vip-ha-313128-m02" [949996a0-0ce0-4ce4-b9ec-86c8f35a4a96] Running
	I0906 18:54:03.527017   24633 system_pods.go:61] "kube-vip-ha-313128-m03" [867dc2d0-034e-45d9-b3c2-72179e58597e] Running
	I0906 18:54:03.527021   24633 system_pods.go:61] "storage-provisioner" [6c957eac-7904-4c39-b858-bfb7da32c75c] Running
	I0906 18:54:03.527029   24633 system_pods.go:74] duration metric: took 185.079358ms to wait for pod list to return data ...
	I0906 18:54:03.527037   24633 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:54:03.714476   24633 request.go:632] Waited for 187.354456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0906 18:54:03.714532   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0906 18:54:03.714538   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.714552   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.714560   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.719117   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:54:03.719260   24633 default_sa.go:45] found service account: "default"
	I0906 18:54:03.719283   24633 default_sa.go:55] duration metric: took 192.237231ms for default service account to be created ...
	I0906 18:54:03.719295   24633 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:54:03.914779   24633 request.go:632] Waited for 195.388568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:54:03.914859   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:54:03.914870   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.914881   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.914890   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.921370   24633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 18:54:03.930987   24633 system_pods.go:86] 24 kube-system pods found
	I0906 18:54:03.931020   24633 system_pods.go:89] "coredns-6f6b679f8f-gccvh" [9b7c0e1a-3359-4f9f-826c-b75cbdfcd500] Running
	I0906 18:54:03.931027   24633 system_pods.go:89] "coredns-6f6b679f8f-gk28z" [ab595ef6-eaa8-44a0-bdad-ddd59c8d052d] Running
	I0906 18:54:03.931031   24633 system_pods.go:89] "etcd-ha-313128" [b4550b86-3359-44e6-a495-7db003a1bb95] Running
	I0906 18:54:03.931035   24633 system_pods.go:89] "etcd-ha-313128-m02" [d42fd6b2-4ecd-49e8-b5b2-5b29fabe2d1e] Running
	I0906 18:54:03.931039   24633 system_pods.go:89] "etcd-ha-313128-m03" [389e0f5d-34fa-40ff-bba5-079485a68d04] Running
	I0906 18:54:03.931043   24633 system_pods.go:89] "kindnet-h2trt" [90af3550-1fae-46bd-9329-f185fcdb23c6] Running
	I0906 18:54:03.931046   24633 system_pods.go:89] "kindnet-jl257" [0c8c46d5-9a1f-40c6-823e-3e0afca658c5] Running
	I0906 18:54:03.931050   24633 system_pods.go:89] "kindnet-t65ls" [657498aa-b76e-4eb2-abbe-5d8a050fc415] Running
	I0906 18:54:03.931059   24633 system_pods.go:89] "kube-apiserver-ha-313128" [081ff647-e9c5-4cce-895a-e5e660db1acc] Running
	I0906 18:54:03.931064   24633 system_pods.go:89] "kube-apiserver-ha-313128-m02" [bed2808b-bef1-4fd4-a811-762a5ff46343] Running
	I0906 18:54:03.931069   24633 system_pods.go:89] "kube-apiserver-ha-313128-m03" [df855b79-c920-42c5-a8c2-d4d97c4d0fed] Running
	I0906 18:54:03.931076   24633 system_pods.go:89] "kube-controller-manager-ha-313128" [ba0308a5-06d8-468b-a3a1-e95a28a52dd7] Running
	I0906 18:54:03.931082   24633 system_pods.go:89] "kube-controller-manager-ha-313128-m02" [3f4032ce-c8b4-4a2c-9384-82d5d5ec0874] Running
	I0906 18:54:03.931087   24633 system_pods.go:89] "kube-controller-manager-ha-313128-m03" [4f975f72-075c-43dd-b104-bdf5172f45ed] Running
	I0906 18:54:03.931097   24633 system_pods.go:89] "kube-proxy-gfjr7" [2fb5a899-48c8-4e96-ac8e-b77570ecaf26] Running
	I0906 18:54:03.931102   24633 system_pods.go:89] "kube-proxy-h5xn7" [e45358c5-398e-4d33-9bd0-a4f28ce17ac9] Running
	I0906 18:54:03.931106   24633 system_pods.go:89] "kube-proxy-xjp6p" [0cbbf003-361c-441e-a2fe-18783999b020] Running
	I0906 18:54:03.931111   24633 system_pods.go:89] "kube-scheduler-ha-313128" [8580599b-125e-4a2f-9019-41b305c0f611] Running
	I0906 18:54:03.931118   24633 system_pods.go:89] "kube-scheduler-ha-313128-m02" [81cb0c5f-7e54-4e8c-b089-d6a4e2c9cbf0] Running
	I0906 18:54:03.931121   24633 system_pods.go:89] "kube-scheduler-ha-313128-m03" [a49687b2-124f-49c7-abfe-5e401ebabc1f] Running
	I0906 18:54:03.931127   24633 system_pods.go:89] "kube-vip-ha-313128" [6e270949-38fe-475f-b902-ede9d2cb795f] Running
	I0906 18:54:03.931131   24633 system_pods.go:89] "kube-vip-ha-313128-m02" [949996a0-0ce0-4ce4-b9ec-86c8f35a4a96] Running
	I0906 18:54:03.931139   24633 system_pods.go:89] "kube-vip-ha-313128-m03" [867dc2d0-034e-45d9-b3c2-72179e58597e] Running
	I0906 18:54:03.931147   24633 system_pods.go:89] "storage-provisioner" [6c957eac-7904-4c39-b858-bfb7da32c75c] Running
	I0906 18:54:03.931155   24633 system_pods.go:126] duration metric: took 211.85328ms to wait for k8s-apps to be running ...
	I0906 18:54:03.931167   24633 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:54:03.931222   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:54:03.948768   24633 system_svc.go:56] duration metric: took 17.590976ms WaitForService to wait for kubelet
	I0906 18:54:03.948803   24633 kubeadm.go:582] duration metric: took 21.509787394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:54:03.948831   24633 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:54:04.114236   24633 request.go:632] Waited for 165.302052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes
	I0906 18:54:04.114297   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes
	I0906 18:54:04.114303   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:04.114310   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:04.114313   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:04.118103   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:04.119134   24633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:54:04.119155   24633 node_conditions.go:123] node cpu capacity is 2
	I0906 18:54:04.119171   24633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:54:04.119174   24633 node_conditions.go:123] node cpu capacity is 2
	I0906 18:54:04.119178   24633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:54:04.119181   24633 node_conditions.go:123] node cpu capacity is 2
	I0906 18:54:04.119186   24633 node_conditions.go:105] duration metric: took 170.348782ms to run NodePressure ...
	I0906 18:54:04.119199   24633 start.go:241] waiting for startup goroutines ...
	I0906 18:54:04.119227   24633 start.go:255] writing updated cluster config ...
	I0906 18:54:04.119521   24633 ssh_runner.go:195] Run: rm -f paused
	I0906 18:54:04.170894   24633 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 18:54:04.173352   24633 out.go:177] * Done! kubectl is now configured to use "ha-313128" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.789047931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649058789014608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=301d1bff-d03e-4e65-8bcc-054b55c6e4e3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.789685106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e02c1d54-898b-4a89-ba04-4eeff39c0643 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.789774403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e02c1d54-898b-4a89-ba04-4eeff39c0643 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.790128808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725648847674894407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704565865782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d,PodSandboxId:b08178bcf1de75f873c948d3e6641dc5d0ae48e4b5420eebfad85d8caabda791,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725648704521266152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704439858159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-33
59-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725648692553241731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172564869
0396327444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d,PodSandboxId:68a537b5386bf0dc2a954b946f2a376eea0d8d10ec6e3b2ab4c6e6f1f7dbebd8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172564868078
7067794,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4632629df72b4c4f23c3be823465189,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f,PodSandboxId:b9f62786c7a95e9ef333ad31c2626202c9a1de9167e00facaad0a995ca9f4799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725648679066355632,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387,PodSandboxId:9fee72e04c13707f52815f360e30c0db2e46b810e8ce54b184507a5ce3f1d06d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725648679036550947,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725648678969341073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725648678980034761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e02c1d54-898b-4a89-ba04-4eeff39c0643 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.843464409Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=871ad76f-12ad-4490-a716-81070f3e1714 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.843623569Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=871ad76f-12ad-4490-a716-81070f3e1714 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.845468772Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13249dfb-a6a7-47c9-9c18-ccd1b8c24dc7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.846205873Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649058846158499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13249dfb-a6a7-47c9-9c18-ccd1b8c24dc7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.846989468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea2ba95b-2e31-4d6d-a743-ca1012f72430 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.847080858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea2ba95b-2e31-4d6d-a743-ca1012f72430 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.847460693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725648847674894407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704565865782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d,PodSandboxId:b08178bcf1de75f873c948d3e6641dc5d0ae48e4b5420eebfad85d8caabda791,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725648704521266152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704439858159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-33
59-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725648692553241731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172564869
0396327444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d,PodSandboxId:68a537b5386bf0dc2a954b946f2a376eea0d8d10ec6e3b2ab4c6e6f1f7dbebd8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172564868078
7067794,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4632629df72b4c4f23c3be823465189,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f,PodSandboxId:b9f62786c7a95e9ef333ad31c2626202c9a1de9167e00facaad0a995ca9f4799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725648679066355632,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387,PodSandboxId:9fee72e04c13707f52815f360e30c0db2e46b810e8ce54b184507a5ce3f1d06d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725648679036550947,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725648678969341073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725648678980034761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea2ba95b-2e31-4d6d-a743-ca1012f72430 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.896626052Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d167dc6b-21f3-4388-8841-81a061b1694a name=/runtime.v1.RuntimeService/Version
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.896700979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d167dc6b-21f3-4388-8841-81a061b1694a name=/runtime.v1.RuntimeService/Version
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.898729366Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db6d5c07-6cf3-424f-8aa8-0e3c9e3c54d8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.899175128Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649058899148474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db6d5c07-6cf3-424f-8aa8-0e3c9e3c54d8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.899901028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02f45c55-1886-4a3b-8329-d9ac1835a7ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.899978860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02f45c55-1886-4a3b-8329-d9ac1835a7ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.900262252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725648847674894407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704565865782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d,PodSandboxId:b08178bcf1de75f873c948d3e6641dc5d0ae48e4b5420eebfad85d8caabda791,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725648704521266152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704439858159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-33
59-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725648692553241731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172564869
0396327444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d,PodSandboxId:68a537b5386bf0dc2a954b946f2a376eea0d8d10ec6e3b2ab4c6e6f1f7dbebd8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172564868078
7067794,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4632629df72b4c4f23c3be823465189,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f,PodSandboxId:b9f62786c7a95e9ef333ad31c2626202c9a1de9167e00facaad0a995ca9f4799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725648679066355632,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387,PodSandboxId:9fee72e04c13707f52815f360e30c0db2e46b810e8ce54b184507a5ce3f1d06d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725648679036550947,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725648678969341073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725648678980034761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02f45c55-1886-4a3b-8329-d9ac1835a7ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.941592517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9faaba39-14ce-4a54-bc6d-226f0dd50134 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.941667176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9faaba39-14ce-4a54-bc6d-226f0dd50134 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.942676571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=650a78af-4050-4960-9db3-1826502730e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.943692980Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649058943217048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=650a78af-4050-4960-9db3-1826502730e0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.944530855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca02ebbd-fbe5-4a54-9c90-75cbc6a205f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.944595690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca02ebbd-fbe5-4a54-9c90-75cbc6a205f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:57:38 ha-313128 crio[668]: time="2024-09-06 18:57:38.945195170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725648847674894407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704565865782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d,PodSandboxId:b08178bcf1de75f873c948d3e6641dc5d0ae48e4b5420eebfad85d8caabda791,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725648704521266152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704439858159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-33
59-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725648692553241731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172564869
0396327444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d,PodSandboxId:68a537b5386bf0dc2a954b946f2a376eea0d8d10ec6e3b2ab4c6e6f1f7dbebd8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172564868078
7067794,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4632629df72b4c4f23c3be823465189,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f,PodSandboxId:b9f62786c7a95e9ef333ad31c2626202c9a1de9167e00facaad0a995ca9f4799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725648679066355632,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387,PodSandboxId:9fee72e04c13707f52815f360e30c0db2e46b810e8ce54b184507a5ce3f1d06d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725648679036550947,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725648678969341073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725648678980034761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca02ebbd-fbe5-4a54-9c90-75cbc6a205f6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b3f2cd2f6c9c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   74b84ec8f17a7       busybox-7dff88458-s2cgz
	5b950806bc4b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   9151daea570f3       coredns-6f6b679f8f-gk28z
	ffd27ffbc9742       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   b08178bcf1de7       storage-provisioner
	76bbd732b8695       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   8449d8c8bfa3e       coredns-6f6b679f8f-gccvh
	76ca94f153009       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   a3128d8e090be       kindnet-h2trt
	135074e446370       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   dde7791c0770a       kube-proxy-h5xn7
	13b08e833a9ce       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   68a537b5386bf       kube-vip-ha-313128
	7f7c5c81b9e05       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   b9f62786c7a95       kube-controller-manager-ha-313128
	9a30d709b3b92       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   9fee72e04c137       kube-apiserver-ha-313128
	e32b22b9f83ac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   0ced27e2ded46       etcd-ha-313128
	a406aeec43303       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   aeb85ed29ab1d       kube-scheduler-ha-313128
	
	
	==> coredns [5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939] <==
	[INFO] 10.244.1.2:46138 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212814s
	[INFO] 10.244.1.2:37199 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142816s
	[INFO] 10.244.1.2:59435 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000134263s
	[INFO] 10.244.2.2:55641 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152123s
	[INFO] 10.244.2.2:44100 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142405s
	[INFO] 10.244.2.2:36497 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120125s
	[INFO] 10.244.2.2:48348 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089194s
	[INFO] 10.244.2.2:54108 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074315s
	[INFO] 10.244.0.4:40347 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182567s
	[INFO] 10.244.0.4:52272 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006329s
	[INFO] 10.244.0.4:51714 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082631s
	[INFO] 10.244.1.2:48124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011336s
	[INFO] 10.244.1.2:41760 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105711s
	[INFO] 10.244.2.2:36465 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146663s
	[INFO] 10.244.2.2:60287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114443s
	[INFO] 10.244.0.4:42561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009569s
	[INFO] 10.244.0.4:55114 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086084s
	[INFO] 10.244.0.4:53953 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067022s
	[INFO] 10.244.1.2:48594 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121564s
	[INFO] 10.244.1.2:53114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166914s
	[INFO] 10.244.2.2:34659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158468s
	[INFO] 10.244.2.2:34171 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176512s
	[INFO] 10.244.0.4:58990 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009694s
	[INFO] 10.244.0.4:43562 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118003s
	[INFO] 10.244.0.4:33609 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086781s
	
	
	==> coredns [76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa] <==
	[INFO] 10.244.2.2:49198 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000157417s
	[INFO] 10.244.2.2:45279 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001726225s
	[INFO] 10.244.0.4:43649 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000087809s
	[INFO] 10.244.0.4:48739 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001332361s
	[INFO] 10.244.1.2:58049 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00030226s
	[INFO] 10.244.1.2:40610 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.031276485s
	[INFO] 10.244.1.2:56981 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192216s
	[INFO] 10.244.2.2:34827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217382s
	[INFO] 10.244.2.2:57219 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001699092s
	[INFO] 10.244.2.2:58659 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001077242s
	[INFO] 10.244.0.4:54771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075932s
	[INFO] 10.244.0.4:36423 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001645163s
	[INFO] 10.244.0.4:44712 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063493s
	[INFO] 10.244.0.4:58952 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001116094s
	[INFO] 10.244.0.4:58673 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091919s
	[INFO] 10.244.1.2:35244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089298s
	[INFO] 10.244.1.2:54461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083864s
	[INFO] 10.244.2.2:46046 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126212s
	[INFO] 10.244.2.2:45762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078805s
	[INFO] 10.244.0.4:56166 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109081s
	[INFO] 10.244.1.2:44485 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175559s
	[INFO] 10.244.1.2:60331 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113433s
	[INFO] 10.244.2.2:33944 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094759s
	[INFO] 10.244.2.2:54249 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007626s
	[INFO] 10.244.0.4:34049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091783s
	
	
	==> describe nodes <==
	Name:               ha-313128
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_51_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:57:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:54:29 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:54:29 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:54:29 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:54:29 +0000   Fri, 06 Sep 2024 18:51:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-313128
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a8374058d8a4ce69ddf9d9b9a6bab88
	  System UUID:                5a837405-8d8a-4ce6-9ddf-9d9b9a6bab88
	  Boot ID:                    4ac8491f-e614-44c2-96e0-f1733bbe0f17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s2cgz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 coredns-6f6b679f8f-gccvh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-6f6b679f8f-gk28z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-313128                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-h2trt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-313128             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-313128    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-proxy-h5xn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-313128             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-313128                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m8s   kube-proxy       
	  Normal  Starting                 6m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s  kubelet          Node ha-313128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s  kubelet          Node ha-313128 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s  kubelet          Node ha-313128 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Normal  NodeReady                5m56s  kubelet          Node ha-313128 status is now: NodeReady
	  Normal  RegisteredNode           5m11s  node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Normal  RegisteredNode           3m52s  node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	
	
	Name:               ha-313128-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_52_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:52:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:55:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 18:54:21 +0000   Fri, 06 Sep 2024 18:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 18:54:21 +0000   Fri, 06 Sep 2024 18:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 18:54:21 +0000   Fri, 06 Sep 2024 18:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 18:54:21 +0000   Fri, 06 Sep 2024 18:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    ha-313128-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9324a423f4b54997b7d3837f23afbaaf
	  System UUID:                9324a423-f4b5-4997-b7d3-837f23afbaaf
	  Boot ID:                    5b6464a0-918c-48fa-869b-49bf49ced3f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-54m66                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-313128-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-t65ls                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-313128-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-313128-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-proxy-xjp6p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-313128-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-vip-ha-313128-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m22s)  kubelet          Node ha-313128-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m22s)  kubelet          Node ha-313128-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m22s)  kubelet          Node ha-313128-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-313128-m02 status is now: NodeNotReady
	
	
	Name:               ha-313128-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_53_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:53:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:57:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:54:09 +0000   Fri, 06 Sep 2024 18:53:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:54:09 +0000   Fri, 06 Sep 2024 18:53:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:54:09 +0000   Fri, 06 Sep 2024 18:53:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:54:09 +0000   Fri, 06 Sep 2024 18:53:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    ha-313128-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d33107b982c427ca47333d2971ade3a
	  System UUID:                1d33107b-982c-427c-a473-33d2971ade3a
	  Boot ID:                    b026d73c-eaf0-4a0e-9fe3-8e30ea0ed740
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k99v6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-313128-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m59s
	  kube-system                 kindnet-jl257                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m1s
	  kube-system                 kube-apiserver-ha-313128-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ha-313128-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-gfjr7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-ha-313128-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-vip-ha-313128-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m54s                kube-proxy       
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m1s)  kubelet          Node ha-313128-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m1s)  kubelet          Node ha-313128-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m1s)  kubelet          Node ha-313128-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	  Normal  RegisteredNode           3m52s                node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	
	
	Name:               ha-313128-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_54_39_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:54:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:57:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 18:54:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 18:54:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 18:54:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 18:54:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-313128-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1284faaf1604a6db25bba3bb7ed5953
	  System UUID:                f1284faa-f160-4a6d-b25b-ba3bb7ed5953
	  Boot ID:                    25844c67-e2f9-444b-99b9-94b7e385f59f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fsbs9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-8tm7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m55s              kube-proxy       
	  Normal  NodeAllocatableEnforced  3m1s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m1s)  kubelet          Node ha-313128-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m1s)  kubelet          Node ha-313128-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m1s)  kubelet          Node ha-313128-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m57s              node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  RegisteredNode           2m56s              node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  RegisteredNode           2m56s              node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  NodeReady                2m41s              kubelet          Node ha-313128-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep 6 18:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050608] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040146] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.800760] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.489418] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.624819] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 18:51] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072122] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.201564] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.131661] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.284243] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.067260] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.541515] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.060417] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251462] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.088029] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.073110] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.070796] kauditd_printk_skb: 38 callbacks suppressed
	[Sep 6 18:52] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8] <==
	{"level":"warn","ts":"2024-09-06T18:57:38.958699Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.016029Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.058169Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.158143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.158232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.217853Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.225774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.229926Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.239231Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.250802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.257654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.258289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.263320Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.266761Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.272332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.285889Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.291824Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.295314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.298666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.306212Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.311669Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.317931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.321249Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.324301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:57:39.382604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:57:39 up 6 min,  0 users,  load average: 0.10, 0.19, 0.11
	Linux ha-313128 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b] <==
	I0906 18:57:03.775275       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 18:57:13.775209       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 18:57:13.775325       1 main.go:299] handling current node
	I0906 18:57:13.775352       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 18:57:13.775360       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 18:57:13.775627       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 18:57:13.775661       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 18:57:13.775773       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 18:57:13.775805       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 18:57:23.776158       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 18:57:23.776308       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 18:57:23.776592       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 18:57:23.776675       1 main.go:299] handling current node
	I0906 18:57:23.776739       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 18:57:23.776759       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 18:57:23.776854       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 18:57:23.776874       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 18:57:33.769340       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 18:57:33.769562       1 main.go:299] handling current node
	I0906 18:57:33.769603       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 18:57:33.769632       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 18:57:33.769844       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 18:57:33.769883       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 18:57:33.769973       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 18:57:33.770000       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387] <==
	I0906 18:51:25.309945       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0906 18:51:25.457042       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0906 18:51:29.747411       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0906 18:51:29.810662       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0906 18:52:18.859356       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0906 18:52:18.859680       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 13.582µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0906 18:52:18.860826       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0906 18:52:18.862134       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0906 18:52:18.863594       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.388495ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0906 18:54:08.665104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47100: use of closed network connection
	E0906 18:54:08.848600       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47110: use of closed network connection
	E0906 18:54:09.036986       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47126: use of closed network connection
	E0906 18:54:09.270200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47144: use of closed network connection
	E0906 18:54:09.458294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47168: use of closed network connection
	E0906 18:54:09.652563       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47176: use of closed network connection
	E0906 18:54:09.835588       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47200: use of closed network connection
	E0906 18:54:10.009376       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47218: use of closed network connection
	E0906 18:54:10.188152       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47226: use of closed network connection
	E0906 18:54:10.471964       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47262: use of closed network connection
	E0906 18:54:10.648030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47290: use of closed network connection
	E0906 18:54:10.836992       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47306: use of closed network connection
	E0906 18:54:11.006159       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47322: use of closed network connection
	E0906 18:54:11.181297       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47338: use of closed network connection
	E0906 18:54:11.366046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47362: use of closed network connection
	W0906 18:55:33.918550       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.172 192.168.39.70]
	
	
	==> kube-controller-manager [7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f] <==
	I0906 18:54:39.045844       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-313128-m04" podCIDRs=["10.244.3.0/24"]
	I0906 18:54:39.045912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:39.046233       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:39.066260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:39.338155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:39.726463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:42.699047       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:43.120944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:43.221197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:44.209153       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-313128-m04"
	I0906 18:54:44.210914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:44.405082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:49.440885       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:58.169228       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-313128-m04"
	I0906 18:54:58.169562       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:58.189627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:59.226373       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:55:09.691672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:55:52.673339       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-313128-m04"
	I0906 18:55:52.673535       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	I0906 18:55:52.703026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	I0906 18:55:52.797314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.734822ms"
	I0906 18:55:52.797410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.591µs"
	I0906 18:55:54.272892       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	I0906 18:55:57.899341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	
	
	==> kube-proxy [135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:51:30.682674       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:51:30.696155       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.70"]
	E0906 18:51:30.696248       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:51:30.742708       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:51:30.742748       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:51:30.742776       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:51:30.746442       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:51:30.746885       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:51:30.747126       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:51:30.748722       1 config.go:197] "Starting service config controller"
	I0906 18:51:30.748777       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:51:30.748818       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:51:30.748834       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:51:30.756676       1 config.go:326] "Starting node config controller"
	I0906 18:51:30.756705       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:51:30.849938       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 18:51:30.850008       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:51:30.856862       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f] <==
	I0906 18:51:25.792122       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 18:53:38.426718       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jl257\": pod kindnet-jl257 is already assigned to node \"ha-313128-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-jl257" node="ha-313128-m03"
	E0906 18:53:38.426941       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jl257\": pod kindnet-jl257 is already assigned to node \"ha-313128-m03\"" pod="kube-system/kindnet-jl257"
	I0906 18:53:38.427016       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jl257" node="ha-313128-m03"
	E0906 18:53:38.516668       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ll952\": pod kindnet-ll952 is already assigned to node \"ha-313128-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-ll952" node="ha-313128-m03"
	E0906 18:53:38.516957       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 80638b6e-9eca-4abb-a3df-4b95fc931417(kube-system/kindnet-ll952) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ll952"
	E0906 18:53:38.517065       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ll952\": pod kindnet-ll952 is already assigned to node \"ha-313128-m03\"" pod="kube-system/kindnet-ll952"
	I0906 18:53:38.517106       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ll952" node="ha-313128-m03"
	E0906 18:54:05.046409       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k99v6\": pod busybox-7dff88458-k99v6 is already assigned to node \"ha-313128-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-k99v6" node="ha-313128-m02"
	E0906 18:54:05.046651       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k99v6\": pod busybox-7dff88458-k99v6 is already assigned to node \"ha-313128-m03\"" pod="default/busybox-7dff88458-k99v6"
	E0906 18:54:05.096920       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-54m66\": pod busybox-7dff88458-54m66 is already assigned to node \"ha-313128-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-54m66" node="ha-313128-m02"
	E0906 18:54:05.096999       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7267943f-285a-4790-987f-7fac660585fc(default/busybox-7dff88458-54m66) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-54m66"
	E0906 18:54:05.097028       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-54m66\": pod busybox-7dff88458-54m66 is already assigned to node \"ha-313128-m02\"" pod="default/busybox-7dff88458-54m66"
	I0906 18:54:05.097080       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-54m66" node="ha-313128-m02"
	E0906 18:54:39.142976       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8tm7b\": pod kube-proxy-8tm7b is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8tm7b" node="ha-313128-m04"
	E0906 18:54:39.143233       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b3bf864c-151e-4cad-b312-6c93ea87e678(kube-system/kube-proxy-8tm7b) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8tm7b"
	E0906 18:54:39.143315       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8tm7b\": pod kube-proxy-8tm7b is already assigned to node \"ha-313128-m04\"" pod="kube-system/kube-proxy-8tm7b"
	I0906 18:54:39.143372       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8tm7b" node="ha-313128-m04"
	E0906 18:54:39.143180       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k9szn\": pod kindnet-k9szn is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k9szn" node="ha-313128-m04"
	E0906 18:54:39.144192       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fdc10711-7099-424e-885e-65589f5642e5(kube-system/kindnet-k9szn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k9szn"
	E0906 18:54:39.144252       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k9szn\": pod kindnet-k9szn is already assigned to node \"ha-313128-m04\"" pod="kube-system/kindnet-k9szn"
	I0906 18:54:39.144297       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k9szn" node="ha-313128-m04"
	E0906 18:54:39.236601       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rnm78\": pod kube-proxy-rnm78 is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rnm78" node="ha-313128-m04"
	E0906 18:54:39.236925       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rnm78\": pod kube-proxy-rnm78 is already assigned to node \"ha-313128-m04\"" pod="kube-system/kube-proxy-rnm78"
	I0906 18:54:39.240895       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rnm78" node="ha-313128-m04"
	
	
	==> kubelet <==
	Sep 06 18:56:25 ha-313128 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 18:56:25 ha-313128 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 18:56:25 ha-313128 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 18:56:25 ha-313128 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 18:56:25 ha-313128 kubelet[1323]: E0906 18:56:25.556058    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648985554672835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:56:25 ha-313128 kubelet[1323]: E0906 18:56:25.556085    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648985554672835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:56:35 ha-313128 kubelet[1323]: E0906 18:56:35.557128    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648995556783682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:56:35 ha-313128 kubelet[1323]: E0906 18:56:35.557202    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725648995556783682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:56:45 ha-313128 kubelet[1323]: E0906 18:56:45.558879    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649005558457329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:56:45 ha-313128 kubelet[1323]: E0906 18:56:45.558914    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649005558457329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:56:55 ha-313128 kubelet[1323]: E0906 18:56:55.561112    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649015560118636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:56:55 ha-313128 kubelet[1323]: E0906 18:56:55.561144    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649015560118636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:05 ha-313128 kubelet[1323]: E0906 18:57:05.563050    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649025562451591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:05 ha-313128 kubelet[1323]: E0906 18:57:05.563599    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649025562451591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:15 ha-313128 kubelet[1323]: E0906 18:57:15.566210    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649035565799045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:15 ha-313128 kubelet[1323]: E0906 18:57:15.566252    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649035565799045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:25 ha-313128 kubelet[1323]: E0906 18:57:25.513735    1323 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 18:57:25 ha-313128 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 18:57:25 ha-313128 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 18:57:25 ha-313128 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 18:57:25 ha-313128 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 18:57:25 ha-313128 kubelet[1323]: E0906 18:57:25.568772    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649045568171193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:25 ha-313128 kubelet[1323]: E0906 18:57:25.568841    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649045568171193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:35 ha-313128 kubelet[1323]: E0906 18:57:35.571057    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649055570736168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:35 ha-313128 kubelet[1323]: E0906 18:57:35.571088    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649055570736168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-313128 -n ha-313128
helpers_test.go:261: (dbg) Run:  kubectl --context ha-313128 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.97s)

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (56.39s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 3 (3.202542269s)

-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-313128-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0906 18:57:43.955497   29482 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:57:43.955787   29482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:57:43.955797   29482 out.go:358] Setting ErrFile to fd 2...
	I0906 18:57:43.955801   29482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:57:43.955957   29482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:57:43.956128   29482 out.go:352] Setting JSON to false
	I0906 18:57:43.956152   29482 mustload.go:65] Loading cluster: ha-313128
	I0906 18:57:43.956204   29482 notify.go:220] Checking for updates...
	I0906 18:57:43.956612   29482 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:57:43.956629   29482 status.go:255] checking status of ha-313128 ...
	I0906 18:57:43.957156   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:43.957201   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:43.976967   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34501
	I0906 18:57:43.977342   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:43.978050   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:43.978086   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:43.978412   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:43.978615   29482 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:57:43.980110   29482 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 18:57:43.980126   29482 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:57:43.980482   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:43.980527   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:43.995959   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0906 18:57:43.996345   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:43.996769   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:43.996788   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:43.997146   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:43.997529   29482 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:57:44.000187   29482 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:44.000510   29482 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:57:44.000541   29482 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:44.000615   29482 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:57:44.000938   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:44.000977   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:44.015643   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0906 18:57:44.016014   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:44.016462   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:44.016491   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:44.016766   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:44.016963   29482 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:57:44.017130   29482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:44.017147   29482 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:57:44.019626   29482 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:44.020118   29482 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:57:44.020151   29482 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:44.020202   29482 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:57:44.020382   29482 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:57:44.020572   29482 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:57:44.020674   29482 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:57:44.106737   29482 ssh_runner.go:195] Run: systemctl --version
	I0906 18:57:44.113976   29482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:44.128517   29482 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:57:44.128551   29482 api_server.go:166] Checking apiserver status ...
	I0906 18:57:44.128588   29482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:57:44.143749   29482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup
	W0906 18:57:44.154810   29482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:57:44.154880   29482 ssh_runner.go:195] Run: ls
	I0906 18:57:44.159418   29482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:57:44.166178   29482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:57:44.166203   29482 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 18:57:44.166225   29482 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:57:44.166248   29482 status.go:255] checking status of ha-313128-m02 ...
	I0906 18:57:44.166639   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:44.166698   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:44.182190   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0906 18:57:44.182592   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:44.183046   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:44.183065   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:44.183374   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:44.183542   29482 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:57:44.185243   29482 status.go:330] ha-313128-m02 host status = "Running" (err=<nil>)
	I0906 18:57:44.185258   29482 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:57:44.185644   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:44.185686   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:44.200883   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33251
	I0906 18:57:44.201307   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:44.201787   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:44.201811   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:44.202160   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:44.202352   29482 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:57:44.204944   29482 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:44.205354   29482 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:57:44.205379   29482 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:44.205542   29482 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:57:44.205838   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:44.205892   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:44.220293   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0906 18:57:44.220687   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:44.221144   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:44.221162   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:44.221481   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:44.221633   29482 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:57:44.221852   29482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:44.221874   29482 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:57:44.224307   29482 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:44.224678   29482 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:57:44.224705   29482 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:44.224826   29482 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:57:44.224989   29482 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:57:44.225141   29482 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:57:44.225303   29482 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	W0906 18:57:46.765201   29482 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:57:46.765313   29482 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0906 18:57:46.765332   29482 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:57:46.765341   29482 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 18:57:46.765369   29482 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:57:46.765379   29482 status.go:255] checking status of ha-313128-m03 ...
	I0906 18:57:46.765775   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:46.765817   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:46.782216   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36015
	I0906 18:57:46.782633   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:46.783142   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:46.783169   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:46.783477   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:46.783700   29482 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:57:46.785507   29482 status.go:330] ha-313128-m03 host status = "Running" (err=<nil>)
	I0906 18:57:46.785526   29482 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:57:46.785809   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:46.785842   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:46.801028   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I0906 18:57:46.801392   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:46.801825   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:46.801851   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:46.802133   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:46.802281   29482 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:57:46.805347   29482 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:46.805777   29482 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:57:46.805803   29482 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:46.805932   29482 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:57:46.806222   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:46.806266   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:46.821564   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46283
	I0906 18:57:46.821984   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:46.822439   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:46.822462   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:46.822727   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:46.822902   29482 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:57:46.823091   29482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:46.823114   29482 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:57:46.825629   29482 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:46.826108   29482 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:57:46.826128   29482 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:46.826292   29482 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:57:46.826454   29482 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:57:46.826588   29482 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:57:46.826697   29482 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:57:46.904538   29482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:46.922295   29482 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:57:46.922334   29482 api_server.go:166] Checking apiserver status ...
	I0906 18:57:46.922369   29482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:57:46.935922   29482 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup
	W0906 18:57:46.946024   29482 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:57:46.946079   29482 ssh_runner.go:195] Run: ls
	I0906 18:57:46.950677   29482 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:57:46.955751   29482 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:57:46.955772   29482 status.go:422] ha-313128-m03 apiserver status = Running (err=<nil>)
	I0906 18:57:46.955781   29482 status.go:257] ha-313128-m03 status: &{Name:ha-313128-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:57:46.955796   29482 status.go:255] checking status of ha-313128-m04 ...
	I0906 18:57:46.956086   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:46.956120   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:46.971138   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34055
	I0906 18:57:46.971573   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:46.972044   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:46.972062   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:46.972370   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:46.972578   29482 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:57:46.974074   29482 status.go:330] ha-313128-m04 host status = "Running" (err=<nil>)
	I0906 18:57:46.974090   29482 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:57:46.974414   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:46.974475   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:46.989406   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0906 18:57:46.989800   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:46.990282   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:46.990301   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:46.990581   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:46.990763   29482 main.go:141] libmachine: (ha-313128-m04) Calling .GetIP
	I0906 18:57:46.993836   29482 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:46.994228   29482 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:57:46.994247   29482 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:46.994431   29482 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:57:46.994742   29482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:46.994789   29482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:47.009877   29482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I0906 18:57:47.010301   29482 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:47.010774   29482 main.go:141] libmachine: Using API Version  1
	I0906 18:57:47.010809   29482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:47.011124   29482 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:47.011341   29482 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 18:57:47.011546   29482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:47.011571   29482 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 18:57:47.014331   29482 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:47.014739   29482 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:57:47.014760   29482 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:47.014953   29482 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHPort
	I0906 18:57:47.015130   29482 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHKeyPath
	I0906 18:57:47.015280   29482 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHUsername
	I0906 18:57:47.015432   29482 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m04/id_rsa Username:docker}
	I0906 18:57:47.096407   29482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:47.111984   29482 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 3 (4.856457192s)

-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-313128-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0906 18:57:48.436362   29581 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:57:48.436642   29581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:57:48.436653   29581 out.go:358] Setting ErrFile to fd 2...
	I0906 18:57:48.436660   29581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:57:48.436982   29581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:57:48.437199   29581 out.go:352] Setting JSON to false
	I0906 18:57:48.437232   29581 mustload.go:65] Loading cluster: ha-313128
	I0906 18:57:48.437353   29581 notify.go:220] Checking for updates...
	I0906 18:57:48.437717   29581 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:57:48.437737   29581 status.go:255] checking status of ha-313128 ...
	I0906 18:57:48.438139   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:48.438196   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:48.456406   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42725
	I0906 18:57:48.456811   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:48.457405   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:48.457428   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:48.457826   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:48.458022   29581 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:57:48.459877   29581 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 18:57:48.459896   29581 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:57:48.460188   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:48.460232   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:48.475426   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0906 18:57:48.475864   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:48.476320   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:48.476349   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:48.476633   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:48.476800   29581 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:57:48.479102   29581 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:48.479527   29581 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:57:48.479554   29581 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:48.479675   29581 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:57:48.479963   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:48.480001   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:48.495090   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45291
	I0906 18:57:48.495452   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:48.495892   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:48.495912   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:48.496252   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:48.496421   29581 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:57:48.496577   29581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:48.496604   29581 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:57:48.499484   29581 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:48.499922   29581 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:57:48.499952   29581 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:48.500089   29581 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:57:48.500237   29581 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:57:48.500397   29581 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:57:48.500552   29581 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:57:48.581571   29581 ssh_runner.go:195] Run: systemctl --version
	I0906 18:57:48.589798   29581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:48.605622   29581 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:57:48.605656   29581 api_server.go:166] Checking apiserver status ...
	I0906 18:57:48.605691   29581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:57:48.621583   29581 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup
	W0906 18:57:48.634891   29581 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:57:48.634969   29581 ssh_runner.go:195] Run: ls
	I0906 18:57:48.641133   29581 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:57:48.645439   29581 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:57:48.645464   29581 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 18:57:48.645473   29581 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:57:48.645488   29581 status.go:255] checking status of ha-313128-m02 ...
	I0906 18:57:48.645808   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:48.645864   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:48.661648   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0906 18:57:48.662084   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:48.662563   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:48.662581   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:48.662925   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:48.663082   29581 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:57:48.664639   29581 status.go:330] ha-313128-m02 host status = "Running" (err=<nil>)
	I0906 18:57:48.664653   29581 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:57:48.664956   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:48.665009   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:48.679873   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I0906 18:57:48.680350   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:48.680897   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:48.680913   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:48.681200   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:48.681399   29581 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:57:48.684948   29581 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:48.685422   29581 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:57:48.685450   29581 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:48.685650   29581 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:57:48.686061   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:48.686108   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:48.702050   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0906 18:57:48.702504   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:48.702964   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:48.702984   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:48.703309   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:48.703538   29581 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:57:48.703739   29581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:48.703761   29581 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:57:48.706842   29581 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:48.707241   29581 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:57:48.707277   29581 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:48.707467   29581 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:57:48.707657   29581 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:57:48.707777   29581 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:57:48.707919   29581 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	W0906 18:57:49.833174   29581 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:57:49.833233   29581 retry.go:31] will retry after 173.675301ms: dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:57:52.905220   29581 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:57:52.905297   29581 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0906 18:57:52.905310   29581 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:57:52.905334   29581 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 18:57:52.905352   29581 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:57:52.905371   29581 status.go:255] checking status of ha-313128-m03 ...
	I0906 18:57:52.905692   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:52.905743   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:52.921252   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
	I0906 18:57:52.921667   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:52.922195   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:52.922217   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:52.922545   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:52.922759   29581 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:57:52.924314   29581 status.go:330] ha-313128-m03 host status = "Running" (err=<nil>)
	I0906 18:57:52.924332   29581 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:57:52.924611   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:52.924643   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:52.940140   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0906 18:57:52.940624   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:52.941082   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:52.941105   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:52.941406   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:52.941637   29581 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:57:52.943979   29581 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:52.944378   29581 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:57:52.944395   29581 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:52.944567   29581 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:57:52.944896   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:52.944938   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:52.959496   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I0906 18:57:52.959914   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:52.960368   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:52.960389   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:52.960688   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:52.960911   29581 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:57:52.961264   29581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:52.961282   29581 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:57:52.964094   29581 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:52.964520   29581 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:57:52.964544   29581 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:52.964699   29581 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:57:52.964883   29581 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:57:52.965055   29581 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:57:52.965208   29581 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:57:53.045271   29581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:53.062108   29581 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:57:53.062134   29581 api_server.go:166] Checking apiserver status ...
	I0906 18:57:53.062163   29581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:57:53.077792   29581 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup
	W0906 18:57:53.087589   29581 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:57:53.087636   29581 ssh_runner.go:195] Run: ls
	I0906 18:57:53.091840   29581 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:57:53.096202   29581 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:57:53.096226   29581 status.go:422] ha-313128-m03 apiserver status = Running (err=<nil>)
	I0906 18:57:53.096237   29581 status.go:257] ha-313128-m03 status: &{Name:ha-313128-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:57:53.096255   29581 status.go:255] checking status of ha-313128-m04 ...
	I0906 18:57:53.096607   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:53.096639   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:53.111844   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0906 18:57:53.112311   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:53.112816   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:53.112839   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:53.113122   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:53.113316   29581 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:57:53.114689   29581 status.go:330] ha-313128-m04 host status = "Running" (err=<nil>)
	I0906 18:57:53.114703   29581 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:57:53.114979   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:53.115007   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:53.129608   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43469
	I0906 18:57:53.130015   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:53.130415   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:53.130439   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:53.130723   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:53.130927   29581 main.go:141] libmachine: (ha-313128-m04) Calling .GetIP
	I0906 18:57:53.133524   29581 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:53.133946   29581 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:57:53.133971   29581 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:53.134087   29581 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:57:53.134424   29581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:53.134461   29581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:53.149795   29581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0906 18:57:53.150200   29581 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:53.150655   29581 main.go:141] libmachine: Using API Version  1
	I0906 18:57:53.150670   29581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:53.150975   29581 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:53.151154   29581 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 18:57:53.151334   29581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:53.151357   29581 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 18:57:53.154159   29581 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:53.154594   29581 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:57:53.154622   29581 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:53.154742   29581 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHPort
	I0906 18:57:53.154916   29581 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHKeyPath
	I0906 18:57:53.155082   29581 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHUsername
	I0906 18:57:53.155232   29581 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m04/id_rsa Username:docker}
	I0906 18:57:53.236113   29581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:53.251247   29581 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 3 (5.307166644s)

-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-313128-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0906 18:57:54.135687   29682 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:57:54.135942   29682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:57:54.135951   29682 out.go:358] Setting ErrFile to fd 2...
	I0906 18:57:54.135964   29682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:57:54.136135   29682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:57:54.136295   29682 out.go:352] Setting JSON to false
	I0906 18:57:54.136325   29682 mustload.go:65] Loading cluster: ha-313128
	I0906 18:57:54.136373   29682 notify.go:220] Checking for updates...
	I0906 18:57:54.136676   29682 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:57:54.136690   29682 status.go:255] checking status of ha-313128 ...
	I0906 18:57:54.137230   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:54.137282   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:54.155909   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37073
	I0906 18:57:54.156339   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:54.156876   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:54.156902   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:54.157334   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:54.157555   29682 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:57:54.159121   29682 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 18:57:54.159134   29682 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:57:54.159397   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:54.159432   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:54.174202   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45925
	I0906 18:57:54.174600   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:54.175090   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:54.175118   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:54.175406   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:54.175597   29682 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:57:54.178363   29682 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:54.178756   29682 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:57:54.178798   29682 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:54.178921   29682 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:57:54.179265   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:54.179303   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:54.195261   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39987
	I0906 18:57:54.195619   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:54.196089   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:54.196109   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:54.196400   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:54.196581   29682 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:57:54.196748   29682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:54.196775   29682 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:57:54.199441   29682 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:54.199874   29682 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:57:54.199895   29682 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:57:54.200022   29682 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:57:54.200194   29682 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:57:54.200367   29682 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:57:54.200474   29682 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:57:54.289116   29682 ssh_runner.go:195] Run: systemctl --version
	I0906 18:57:54.295527   29682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:54.310929   29682 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:57:54.310966   29682 api_server.go:166] Checking apiserver status ...
	I0906 18:57:54.311003   29682 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:57:54.325965   29682 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup
	W0906 18:57:54.335149   29682 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:57:54.335221   29682 ssh_runner.go:195] Run: ls
	I0906 18:57:54.339933   29682 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:57:54.344224   29682 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:57:54.344257   29682 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 18:57:54.344270   29682 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:57:54.344290   29682 status.go:255] checking status of ha-313128-m02 ...
	I0906 18:57:54.344633   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:54.344674   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:54.359490   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I0906 18:57:54.359962   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:54.360402   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:54.360431   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:54.360711   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:54.360908   29682 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:57:54.362359   29682 status.go:330] ha-313128-m02 host status = "Running" (err=<nil>)
	I0906 18:57:54.362376   29682 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:57:54.362670   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:54.362726   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:54.377869   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41223
	I0906 18:57:54.378324   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:54.378770   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:54.378791   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:54.379100   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:54.379277   29682 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:57:54.381843   29682 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:54.382275   29682 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:57:54.382312   29682 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:54.382432   29682 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:57:54.382778   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:54.382815   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:54.397806   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35689
	I0906 18:57:54.398238   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:54.398628   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:54.398652   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:54.398971   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:54.399158   29682 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:57:54.399381   29682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:54.399400   29682 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:57:54.402100   29682 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:54.402489   29682 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:57:54.402516   29682 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:57:54.402634   29682 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:57:54.402804   29682 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:57:54.402951   29682 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:57:54.403045   29682 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	W0906 18:57:55.981191   29682 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:57:55.981241   29682 retry.go:31] will retry after 241.598394ms: dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:57:59.049218   29682 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:57:59.049314   29682 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0906 18:57:59.049332   29682 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:57:59.049341   29682 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 18:57:59.049366   29682 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:57:59.049373   29682 status.go:255] checking status of ha-313128-m03 ...
	I0906 18:57:59.049663   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:59.049717   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:59.064214   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36867
	I0906 18:57:59.064642   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:59.065152   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:59.065173   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:59.065491   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:59.065674   29682 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:57:59.067020   29682 status.go:330] ha-313128-m03 host status = "Running" (err=<nil>)
	I0906 18:57:59.067033   29682 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:57:59.067335   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:59.067376   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:59.081684   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0906 18:57:59.082047   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:59.082421   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:59.082439   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:59.082754   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:59.082908   29682 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:57:59.085614   29682 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:59.086024   29682 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:57:59.086049   29682 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:59.086194   29682 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:57:59.086572   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:59.086614   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:59.102387   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0906 18:57:59.102814   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:59.103271   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:59.103291   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:59.103607   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:59.103806   29682 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:57:59.104025   29682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:59.104045   29682 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:57:59.106768   29682 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:59.107169   29682 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:57:59.107196   29682 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:57:59.107358   29682 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:57:59.107516   29682 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:57:59.107656   29682 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:57:59.107813   29682 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:57:59.189406   29682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:59.206832   29682 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:57:59.206866   29682 api_server.go:166] Checking apiserver status ...
	I0906 18:57:59.206907   29682 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:57:59.222273   29682 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup
	W0906 18:57:59.232437   29682 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:57:59.232493   29682 ssh_runner.go:195] Run: ls
	I0906 18:57:59.237168   29682 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:57:59.243030   29682 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:57:59.243053   29682 status.go:422] ha-313128-m03 apiserver status = Running (err=<nil>)
	I0906 18:57:59.243066   29682 status.go:257] ha-313128-m03 status: &{Name:ha-313128-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:57:59.243084   29682 status.go:255] checking status of ha-313128-m04 ...
	I0906 18:57:59.243459   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:59.243498   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:59.258705   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35007
	I0906 18:57:59.259142   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:59.259612   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:59.259633   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:59.259914   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:59.260105   29682 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:57:59.261705   29682 status.go:330] ha-313128-m04 host status = "Running" (err=<nil>)
	I0906 18:57:59.261718   29682 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:57:59.262023   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:59.262077   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:59.276700   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I0906 18:57:59.277132   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:59.277604   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:59.277629   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:59.277918   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:59.278083   29682 main.go:141] libmachine: (ha-313128-m04) Calling .GetIP
	I0906 18:57:59.280601   29682 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:59.281084   29682 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:57:59.281109   29682 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:59.281233   29682 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:57:59.281589   29682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:57:59.281625   29682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:57:59.297064   29682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35043
	I0906 18:57:59.297503   29682 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:57:59.298016   29682 main.go:141] libmachine: Using API Version  1
	I0906 18:57:59.298038   29682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:57:59.298323   29682 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:57:59.298475   29682 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 18:57:59.298640   29682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:57:59.298658   29682 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 18:57:59.301556   29682 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:59.301975   29682 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:57:59.301992   29682 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:57:59.302141   29682 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHPort
	I0906 18:57:59.302303   29682 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHKeyPath
	I0906 18:57:59.302458   29682 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHUsername
	I0906 18:57:59.302592   29682 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m04/id_rsa Username:docker}
	I0906 18:57:59.384361   29682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:57:59.401355   29682 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 3 (4.983005417s)

-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-313128-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0906 18:58:00.607074   29783 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:58:00.607329   29783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:00.607338   29783 out.go:358] Setting ErrFile to fd 2...
	I0906 18:58:00.607343   29783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:00.607557   29783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:58:00.607765   29783 out.go:352] Setting JSON to false
	I0906 18:58:00.607791   29783 mustload.go:65] Loading cluster: ha-313128
	I0906 18:58:00.607835   29783 notify.go:220] Checking for updates...
	I0906 18:58:00.608177   29783 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:58:00.608193   29783 status.go:255] checking status of ha-313128 ...
	I0906 18:58:00.608606   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:00.608670   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:00.627584   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I0906 18:58:00.628062   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:00.628672   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:00.628697   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:00.629060   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:00.629250   29783 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:58:00.630909   29783 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 18:58:00.630927   29783 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:58:00.631251   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:00.631291   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:00.648258   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0906 18:58:00.648612   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:00.649113   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:00.649135   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:00.649453   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:00.649630   29783 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:58:00.652251   29783 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:00.652719   29783 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:58:00.652751   29783 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:00.652829   29783 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:58:00.653209   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:00.653258   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:00.668784   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I0906 18:58:00.669134   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:00.669585   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:00.669605   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:00.669881   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:00.670064   29783 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:58:00.670262   29783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:00.670291   29783 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:58:00.672923   29783 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:00.673300   29783 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:58:00.673332   29783 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:00.673475   29783 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:58:00.673623   29783 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:58:00.673771   29783 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:58:00.673888   29783 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:58:00.757186   29783 ssh_runner.go:195] Run: systemctl --version
	I0906 18:58:00.764116   29783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:00.784545   29783 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:58:00.784585   29783 api_server.go:166] Checking apiserver status ...
	I0906 18:58:00.784624   29783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:58:00.807561   29783 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup
	W0906 18:58:00.818161   29783 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:58:00.818216   29783 ssh_runner.go:195] Run: ls
	I0906 18:58:00.823249   29783 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:58:00.827588   29783 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:58:00.827616   29783 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 18:58:00.827629   29783 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:00.827654   29783 status.go:255] checking status of ha-313128-m02 ...
	I0906 18:58:00.828063   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:00.828104   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:00.842655   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33621
	I0906 18:58:00.843049   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:00.843450   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:00.843471   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:00.843809   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:00.843956   29783 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:58:00.845591   29783 status.go:330] ha-313128-m02 host status = "Running" (err=<nil>)
	I0906 18:58:00.845610   29783 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:58:00.846005   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:00.846046   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:00.861120   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35725
	I0906 18:58:00.861609   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:00.862042   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:00.862065   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:00.862354   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:00.862514   29783 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:58:00.865196   29783 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:00.865885   29783 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:58:00.865914   29783 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:00.866016   29783 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:58:00.866425   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:00.866466   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:00.881016   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0906 18:58:00.881382   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:00.881873   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:00.881898   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:00.882207   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:00.882411   29783 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:58:00.882598   29783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:00.882619   29783 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:58:00.885249   29783 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:00.885723   29783 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:58:00.885744   29783 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:00.885958   29783 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:58:00.886153   29783 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:58:00.886320   29783 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:58:00.886476   29783 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	W0906 18:58:02.121177   29783 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:58:02.121228   29783 retry.go:31] will retry after 354.909804ms: dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:58:05.193156   29783 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:58:05.193238   29783 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0906 18:58:05.193268   29783 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:58:05.193275   29783 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 18:58:05.193309   29783 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:58:05.193316   29783 status.go:255] checking status of ha-313128-m03 ...
	I0906 18:58:05.193619   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:05.193661   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:05.208260   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35805
	I0906 18:58:05.208671   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:05.209378   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:05.209409   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:05.209694   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:05.209897   29783 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:58:05.211553   29783 status.go:330] ha-313128-m03 host status = "Running" (err=<nil>)
	I0906 18:58:05.211568   29783 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:58:05.211839   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:05.211888   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:05.226217   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I0906 18:58:05.226553   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:05.226966   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:05.226998   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:05.227276   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:05.227467   29783 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:58:05.230080   29783 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:05.230437   29783 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:05.230455   29783 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:05.230595   29783 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:58:05.230870   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:05.230904   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:05.245084   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34353
	I0906 18:58:05.245461   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:05.245853   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:05.245869   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:05.246150   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:05.246310   29783 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:58:05.246487   29783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:05.246517   29783 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:58:05.249011   29783 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:05.249383   29783 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:05.249422   29783 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:05.249582   29783 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:58:05.249764   29783 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:58:05.249939   29783 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:58:05.250179   29783 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:58:05.337048   29783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:05.353426   29783 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:58:05.353452   29783 api_server.go:166] Checking apiserver status ...
	I0906 18:58:05.353493   29783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:58:05.367289   29783 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup
	W0906 18:58:05.377568   29783 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:58:05.377630   29783 ssh_runner.go:195] Run: ls
	I0906 18:58:05.383057   29783 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:58:05.389315   29783 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:58:05.389343   29783 status.go:422] ha-313128-m03 apiserver status = Running (err=<nil>)
	I0906 18:58:05.389352   29783 status.go:257] ha-313128-m03 status: &{Name:ha-313128-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:05.389366   29783 status.go:255] checking status of ha-313128-m04 ...
	I0906 18:58:05.389688   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:05.389725   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:05.406724   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46203
	I0906 18:58:05.407176   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:05.407652   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:05.407679   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:05.407988   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:05.408162   29783 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:58:05.409657   29783 status.go:330] ha-313128-m04 host status = "Running" (err=<nil>)
	I0906 18:58:05.409673   29783 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:58:05.409956   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:05.409996   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:05.425236   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43103
	I0906 18:58:05.425785   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:05.426309   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:05.426337   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:05.426701   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:05.426876   29783 main.go:141] libmachine: (ha-313128-m04) Calling .GetIP
	I0906 18:58:05.429630   29783 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:05.430008   29783 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:05.430031   29783 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:05.430204   29783 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:58:05.430483   29783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:05.430528   29783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:05.445392   29783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
	I0906 18:58:05.445742   29783 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:05.446155   29783 main.go:141] libmachine: Using API Version  1
	I0906 18:58:05.446182   29783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:05.446459   29783 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:05.446630   29783 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 18:58:05.446804   29783 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:05.446824   29783 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 18:58:05.449473   29783 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:05.449901   29783 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:05.449936   29783 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:05.450097   29783 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHPort
	I0906 18:58:05.450267   29783 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHKeyPath
	I0906 18:58:05.450426   29783 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHUsername
	I0906 18:58:05.450557   29783 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m04/id_rsa Username:docker}
	I0906 18:58:05.532767   29783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:05.547543   29783 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 3 (4.046127984s)

-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-313128-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0906 18:58:07.941511   29899 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:58:07.941761   29899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:07.941771   29899 out.go:358] Setting ErrFile to fd 2...
	I0906 18:58:07.941777   29899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:07.941961   29899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:58:07.942156   29899 out.go:352] Setting JSON to false
	I0906 18:58:07.942184   29899 mustload.go:65] Loading cluster: ha-313128
	I0906 18:58:07.942273   29899 notify.go:220] Checking for updates...
	I0906 18:58:07.942685   29899 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:58:07.942704   29899 status.go:255] checking status of ha-313128 ...
	I0906 18:58:07.943113   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:07.943170   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:07.961165   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0906 18:58:07.961689   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:07.962348   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:07.962370   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:07.962722   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:07.962902   29899 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:58:07.964605   29899 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 18:58:07.964624   29899 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:58:07.964997   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:07.965029   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:07.980690   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38117
	I0906 18:58:07.981144   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:07.981592   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:07.981610   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:07.981950   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:07.982175   29899 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:58:07.984648   29899 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:07.985025   29899 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:58:07.985055   29899 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:07.985159   29899 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:58:07.985495   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:07.985537   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:07.999991   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39793
	I0906 18:58:08.000337   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:08.000782   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:08.000801   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:08.001105   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:08.001281   29899 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:58:08.001459   29899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:08.001481   29899 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:58:08.003866   29899 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:08.004253   29899 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:58:08.004284   29899 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:08.004417   29899 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:58:08.004575   29899 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:58:08.004708   29899 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:58:08.004889   29899 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:58:08.089713   29899 ssh_runner.go:195] Run: systemctl --version
	I0906 18:58:08.095736   29899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:08.113192   29899 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:58:08.113225   29899 api_server.go:166] Checking apiserver status ...
	I0906 18:58:08.113267   29899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:58:08.132280   29899 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup
	W0906 18:58:08.145308   29899 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:58:08.145364   29899 ssh_runner.go:195] Run: ls
	I0906 18:58:08.150059   29899 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:58:08.156343   29899 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:58:08.156369   29899 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 18:58:08.156378   29899 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:08.156400   29899 status.go:255] checking status of ha-313128-m02 ...
	I0906 18:58:08.156691   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:08.156721   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:08.171510   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0906 18:58:08.171904   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:08.172383   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:08.172409   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:08.172705   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:08.172957   29899 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:58:08.174443   29899 status.go:330] ha-313128-m02 host status = "Running" (err=<nil>)
	I0906 18:58:08.174457   29899 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:58:08.174749   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:08.174781   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:08.190912   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I0906 18:58:08.191312   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:08.191727   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:08.191748   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:08.192214   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:08.192420   29899 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:58:08.195023   29899 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:08.195515   29899 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:58:08.195543   29899 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:08.195722   29899 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:58:08.196032   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:08.196093   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:08.210908   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I0906 18:58:08.211320   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:08.211752   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:08.211771   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:08.212067   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:08.212233   29899 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:58:08.212414   29899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:08.212432   29899 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:58:08.214848   29899 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:08.215243   29899 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:58:08.215279   29899 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:08.215420   29899 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:58:08.215571   29899 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:58:08.215773   29899 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:58:08.215916   29899 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	W0906 18:58:08.269113   29899 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:58:08.269177   29899 retry.go:31] will retry after 266.108653ms: dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:58:11.593109   29899 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:58:11.593200   29899 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0906 18:58:11.593233   29899 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:58:11.593244   29899 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 18:58:11.593268   29899 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:58:11.593285   29899 status.go:255] checking status of ha-313128-m03 ...
	I0906 18:58:11.593723   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:11.593775   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:11.609246   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0906 18:58:11.609660   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:11.610093   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:11.610116   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:11.610425   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:11.610607   29899 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:58:11.612112   29899 status.go:330] ha-313128-m03 host status = "Running" (err=<nil>)
	I0906 18:58:11.612132   29899 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:58:11.612432   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:11.612471   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:11.627507   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43377
	I0906 18:58:11.627947   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:11.628386   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:11.628405   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:11.628707   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:11.628929   29899 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:58:11.631834   29899 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:11.632254   29899 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:11.632278   29899 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:11.632471   29899 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:58:11.632772   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:11.632812   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:11.649525   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I0906 18:58:11.649934   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:11.650408   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:11.650429   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:11.650728   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:11.650911   29899 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:58:11.651078   29899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:11.651097   29899 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:58:11.653951   29899 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:11.654396   29899 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:11.654417   29899 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:11.654543   29899 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:58:11.654709   29899 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:58:11.654866   29899 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:58:11.654990   29899 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:58:11.736637   29899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:11.752130   29899 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:58:11.752160   29899 api_server.go:166] Checking apiserver status ...
	I0906 18:58:11.752198   29899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:58:11.766234   29899 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup
	W0906 18:58:11.776296   29899 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:58:11.776354   29899 ssh_runner.go:195] Run: ls
	I0906 18:58:11.781311   29899 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:58:11.787812   29899 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:58:11.787835   29899 status.go:422] ha-313128-m03 apiserver status = Running (err=<nil>)
	I0906 18:58:11.787844   29899 status.go:257] ha-313128-m03 status: &{Name:ha-313128-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:11.787859   29899 status.go:255] checking status of ha-313128-m04 ...
	I0906 18:58:11.788169   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:11.788202   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:11.804268   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34981
	I0906 18:58:11.804678   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:11.805178   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:11.805199   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:11.805465   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:11.805670   29899 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:58:11.807376   29899 status.go:330] ha-313128-m04 host status = "Running" (err=<nil>)
	I0906 18:58:11.807390   29899 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:58:11.807694   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:11.807732   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:11.823197   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46635
	I0906 18:58:11.823587   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:11.824058   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:11.824080   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:11.824403   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:11.824589   29899 main.go:141] libmachine: (ha-313128-m04) Calling .GetIP
	I0906 18:58:11.827429   29899 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:11.827789   29899 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:11.827824   29899 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:11.827957   29899 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:58:11.828242   29899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:11.828276   29899 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:11.842632   29899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36557
	I0906 18:58:11.843074   29899 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:11.843617   29899 main.go:141] libmachine: Using API Version  1
	I0906 18:58:11.843642   29899 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:11.843965   29899 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:11.844117   29899 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 18:58:11.844296   29899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:11.844315   29899 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 18:58:11.846853   29899 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:11.847214   29899 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:11.847253   29899 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:11.847413   29899 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHPort
	I0906 18:58:11.847606   29899 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHKeyPath
	I0906 18:58:11.847745   29899 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHUsername
	I0906 18:58:11.847871   29899 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m04/id_rsa Username:docker}
	I0906 18:58:11.928758   29899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:11.945529   29899 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
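
The m02 failure above is a transport-level one (repeated "dial tcp 192.168.39.32:22: connect: no route to host"), so it can be confirmed independently of minikube. A quick sketch, with the IP and the libvirt domain name taken from this run (the kvm2 driver backs each node with a libvirt domain named after the machine, as the DBG lines show):

    # is the guest reachable at all?
    ping -c 3 -W 2 192.168.39.32
    # is anything listening on the SSH port the status check dials?
    nc -vz -w 5 192.168.39.32 22
    # ask libvirt whether the VM itself is still running
    sudo virsh domstate ha-313128-m02

If the domain reports "shut off" here, the SSH failures are expected and line up with the Host:Stopped result in the final status run below.
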
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 3 (3.740171934s)

                                                
                                                
-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-313128-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:58:15.160459   30000 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:58:15.160728   30000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:15.160739   30000 out.go:358] Setting ErrFile to fd 2...
	I0906 18:58:15.160747   30000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:15.160967   30000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:58:15.161174   30000 out.go:352] Setting JSON to false
	I0906 18:58:15.161217   30000 mustload.go:65] Loading cluster: ha-313128
	I0906 18:58:15.161327   30000 notify.go:220] Checking for updates...
	I0906 18:58:15.161636   30000 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:58:15.161654   30000 status.go:255] checking status of ha-313128 ...
	I0906 18:58:15.162030   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:15.162106   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:15.179933   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43337
	I0906 18:58:15.180393   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:15.180951   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:15.180969   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:15.181309   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:15.181488   30000 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:58:15.182928   30000 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 18:58:15.182949   30000 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:58:15.183277   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:15.183314   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:15.198607   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43801
	I0906 18:58:15.199016   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:15.199453   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:15.199491   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:15.199824   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:15.200017   30000 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:58:15.202639   30000 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:15.203064   30000 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:58:15.203090   30000 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:15.203279   30000 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:58:15.203674   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:15.203729   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:15.218955   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I0906 18:58:15.219377   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:15.219865   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:15.219888   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:15.220184   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:15.220373   30000 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:58:15.220565   30000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:15.220590   30000 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:58:15.223360   30000 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:15.223785   30000 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:58:15.223815   30000 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:15.223947   30000 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:58:15.224111   30000 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:58:15.224251   30000 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:58:15.224390   30000 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:58:15.309749   30000 ssh_runner.go:195] Run: systemctl --version
	I0906 18:58:15.315516   30000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:15.330557   30000 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:58:15.330589   30000 api_server.go:166] Checking apiserver status ...
	I0906 18:58:15.330627   30000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:58:15.344215   30000 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup
	W0906 18:58:15.354747   30000 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:58:15.354799   30000 ssh_runner.go:195] Run: ls
	I0906 18:58:15.358985   30000 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:58:15.364773   30000 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:58:15.364800   30000 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 18:58:15.364813   30000 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:15.364830   30000 status.go:255] checking status of ha-313128-m02 ...
	I0906 18:58:15.365229   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:15.365274   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:15.380818   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0906 18:58:15.381199   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:15.381670   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:15.381697   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:15.382038   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:15.382245   30000 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:58:15.383799   30000 status.go:330] ha-313128-m02 host status = "Running" (err=<nil>)
	I0906 18:58:15.383815   30000 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:58:15.384200   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:15.384261   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:15.399479   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36037
	I0906 18:58:15.399885   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:15.400346   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:15.400366   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:15.400675   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:15.400852   30000 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:58:15.403692   30000 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:15.404136   30000 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:58:15.404164   30000 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:15.404337   30000 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 18:58:15.404749   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:15.404792   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:15.420426   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I0906 18:58:15.420912   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:15.421398   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:15.421424   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:15.421797   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:15.422017   30000 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:58:15.422220   30000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:15.422239   30000 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:58:15.425064   30000 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:15.425462   30000 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:58:15.425503   30000 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:58:15.425598   30000 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:58:15.425755   30000 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:58:15.425894   30000 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:58:15.426029   30000 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	W0906 18:58:18.505115   30000 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	W0906 18:58:18.505188   30000 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0906 18:58:18.505202   30000 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:58:18.505208   30000 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 18:58:18.505226   30000 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 18:58:18.505232   30000 status.go:255] checking status of ha-313128-m03 ...
	I0906 18:58:18.505601   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:18.505648   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:18.520252   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0906 18:58:18.520643   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:18.521086   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:18.521108   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:18.521433   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:18.521645   30000 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:58:18.523183   30000 status.go:330] ha-313128-m03 host status = "Running" (err=<nil>)
	I0906 18:58:18.523198   30000 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:58:18.523511   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:18.523553   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:18.537921   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43419
	I0906 18:58:18.538356   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:18.538801   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:18.538821   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:18.539104   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:18.539300   30000 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:58:18.542243   30000 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:18.542646   30000 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:18.542681   30000 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:18.542797   30000 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:58:18.543102   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:18.543148   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:18.557153   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0906 18:58:18.557505   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:18.558010   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:18.558026   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:18.558298   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:18.558492   30000 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:58:18.558688   30000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:18.558707   30000 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:58:18.561096   30000 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:18.561417   30000 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:18.561437   30000 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:18.561571   30000 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:58:18.561724   30000 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:58:18.561878   30000 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:58:18.562037   30000 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:58:18.640450   30000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:18.656152   30000 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:58:18.656180   30000 api_server.go:166] Checking apiserver status ...
	I0906 18:58:18.656217   30000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:58:18.675047   30000 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup
	W0906 18:58:18.685245   30000 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:58:18.685304   30000 ssh_runner.go:195] Run: ls
	I0906 18:58:18.690061   30000 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:58:18.697475   30000 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:58:18.697502   30000 status.go:422] ha-313128-m03 apiserver status = Running (err=<nil>)
	I0906 18:58:18.697512   30000 status.go:257] ha-313128-m03 status: &{Name:ha-313128-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:18.697533   30000 status.go:255] checking status of ha-313128-m04 ...
	I0906 18:58:18.697833   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:18.697872   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:18.712604   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34839
	I0906 18:58:18.713075   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:18.713554   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:18.713579   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:18.713854   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:18.714024   30000 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:58:18.715573   30000 status.go:330] ha-313128-m04 host status = "Running" (err=<nil>)
	I0906 18:58:18.715591   30000 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:58:18.715870   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:18.715907   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:18.731045   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33129
	I0906 18:58:18.731425   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:18.731880   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:18.731902   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:18.732192   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:18.732424   30000 main.go:141] libmachine: (ha-313128-m04) Calling .GetIP
	I0906 18:58:18.734893   30000 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:18.735328   30000 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:18.735354   30000 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:18.735499   30000 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:58:18.735781   30000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:18.735814   30000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:18.750063   30000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46613
	I0906 18:58:18.750469   30000 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:18.750921   30000 main.go:141] libmachine: Using API Version  1
	I0906 18:58:18.750943   30000 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:18.751191   30000 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:18.751380   30000 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 18:58:18.751588   30000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:18.751609   30000 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 18:58:18.754257   30000 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:18.754609   30000 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:18.754636   30000 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:18.754766   30000 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHPort
	I0906 18:58:18.754931   30000 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHKeyPath
	I0906 18:58:18.755128   30000 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHUsername
	I0906 18:58:18.755266   30000 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m04/id_rsa Username:docker}
	I0906 18:58:18.840659   30000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:18.857228   30000 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
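
Independently of the host-level probes, the same discrepancy can be cross-checked from the cluster side through the surviving API servers. A sketch using the profile's kubeconfig context (the context name is assumed to match the profile name, as minikube normally configures it):

    # ask the remaining control-plane members what they record for the stopped node
    kubectl --context ha-313128 get nodes -o wide
    # kube-system pods scheduled on the unreachable node
    kubectl --context ha-313128 get pods -n kube-system -o wide --field-selector spec.nodeName=ha-313128-m02

A NotReady condition for ha-313128-m02 would be consistent with the Host:Error / Host:Stopped states reported by the status runs in this log.
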
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 7 (618.178367ms)

                                                
                                                
-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-313128-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:58:24.673071   30137 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:58:24.673309   30137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:24.673318   30137 out.go:358] Setting ErrFile to fd 2...
	I0906 18:58:24.673322   30137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:24.673497   30137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:58:24.673643   30137 out.go:352] Setting JSON to false
	I0906 18:58:24.673666   30137 mustload.go:65] Loading cluster: ha-313128
	I0906 18:58:24.673718   30137 notify.go:220] Checking for updates...
	I0906 18:58:24.674011   30137 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:58:24.674023   30137 status.go:255] checking status of ha-313128 ...
	I0906 18:58:24.674353   30137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:24.674408   30137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:24.694724   30137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0906 18:58:24.695203   30137 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:24.695829   30137 main.go:141] libmachine: Using API Version  1
	I0906 18:58:24.695848   30137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:24.696191   30137 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:24.696384   30137 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:58:24.698121   30137 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 18:58:24.698138   30137 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:58:24.698440   30137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:24.698478   30137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:24.712978   30137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35137
	I0906 18:58:24.713340   30137 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:24.713793   30137 main.go:141] libmachine: Using API Version  1
	I0906 18:58:24.713812   30137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:24.714095   30137 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:24.714290   30137 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:58:24.717139   30137 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:24.717598   30137 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:58:24.717625   30137 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:24.717790   30137 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:58:24.718098   30137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:24.718145   30137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:24.733776   30137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36775
	I0906 18:58:24.734104   30137 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:24.734624   30137 main.go:141] libmachine: Using API Version  1
	I0906 18:58:24.734647   30137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:24.734937   30137 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:24.735129   30137 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:58:24.735335   30137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:24.735356   30137 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:58:24.738477   30137 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:24.738888   30137 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:58:24.738911   30137 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:24.739030   30137 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:58:24.739193   30137 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:58:24.739346   30137 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:58:24.739492   30137 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:58:24.820652   30137 ssh_runner.go:195] Run: systemctl --version
	I0906 18:58:24.826696   30137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:24.841582   30137 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:58:24.841615   30137 api_server.go:166] Checking apiserver status ...
	I0906 18:58:24.841647   30137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:58:24.856553   30137 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup
	W0906 18:58:24.867217   30137 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:58:24.867299   30137 ssh_runner.go:195] Run: ls
	I0906 18:58:24.872888   30137 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:58:24.879169   30137 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:58:24.879202   30137 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 18:58:24.879216   30137 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:24.879257   30137 status.go:255] checking status of ha-313128-m02 ...
	I0906 18:58:24.879543   30137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:24.879576   30137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:24.894485   30137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32845
	I0906 18:58:24.894939   30137 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:24.895429   30137 main.go:141] libmachine: Using API Version  1
	I0906 18:58:24.895458   30137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:24.895754   30137 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:24.895936   30137 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:58:24.897515   30137 status.go:330] ha-313128-m02 host status = "Stopped" (err=<nil>)
	I0906 18:58:24.897531   30137 status.go:343] host is not running, skipping remaining checks
	I0906 18:58:24.897539   30137 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:24.897560   30137 status.go:255] checking status of ha-313128-m03 ...
	I0906 18:58:24.897864   30137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:24.897897   30137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:24.913287   30137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I0906 18:58:24.913765   30137 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:24.914207   30137 main.go:141] libmachine: Using API Version  1
	I0906 18:58:24.914225   30137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:24.914551   30137 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:24.914709   30137 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:58:24.916396   30137 status.go:330] ha-313128-m03 host status = "Running" (err=<nil>)
	I0906 18:58:24.916412   30137 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:58:24.916718   30137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:24.916753   30137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:24.931395   30137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38935
	I0906 18:58:24.931836   30137 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:24.932390   30137 main.go:141] libmachine: Using API Version  1
	I0906 18:58:24.932411   30137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:24.932754   30137 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:24.932949   30137 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:58:24.935792   30137 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:24.936177   30137 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:24.936216   30137 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:24.936338   30137 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:58:24.936639   30137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:24.936672   30137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:24.951814   30137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0906 18:58:24.952211   30137 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:24.952682   30137 main.go:141] libmachine: Using API Version  1
	I0906 18:58:24.952703   30137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:24.953036   30137 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:24.953208   30137 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:58:24.953380   30137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:24.953398   30137 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:58:24.956035   30137 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:24.956482   30137 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:24.956510   30137 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:24.956674   30137 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:58:24.956847   30137 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:58:24.957021   30137 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:58:24.957155   30137 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:58:25.036723   30137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:25.052260   30137 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:58:25.052288   30137 api_server.go:166] Checking apiserver status ...
	I0906 18:58:25.052322   30137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:58:25.067038   30137 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup
	W0906 18:58:25.078541   30137 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:58:25.078603   30137 ssh_runner.go:195] Run: ls
	I0906 18:58:25.084010   30137 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:58:25.089222   30137 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:58:25.089249   30137 status.go:422] ha-313128-m03 apiserver status = Running (err=<nil>)
	I0906 18:58:25.089259   30137 status.go:257] ha-313128-m03 status: &{Name:ha-313128-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:25.089278   30137 status.go:255] checking status of ha-313128-m04 ...
	I0906 18:58:25.089564   30137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:25.089602   30137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:25.105301   30137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46697
	I0906 18:58:25.105747   30137 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:25.106199   30137 main.go:141] libmachine: Using API Version  1
	I0906 18:58:25.106235   30137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:25.106546   30137 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:25.106734   30137 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:58:25.108393   30137 status.go:330] ha-313128-m04 host status = "Running" (err=<nil>)
	I0906 18:58:25.108409   30137 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:58:25.108714   30137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:25.108754   30137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:25.123529   30137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41805
	I0906 18:58:25.123892   30137 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:25.124292   30137 main.go:141] libmachine: Using API Version  1
	I0906 18:58:25.124311   30137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:25.124636   30137 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:25.124822   30137 main.go:141] libmachine: (ha-313128-m04) Calling .GetIP
	I0906 18:58:25.127551   30137 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:25.127956   30137 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:25.127980   30137 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:25.128125   30137 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:58:25.128412   30137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:25.128444   30137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:25.142896   30137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40627
	I0906 18:58:25.143291   30137 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:25.143786   30137 main.go:141] libmachine: Using API Version  1
	I0906 18:58:25.143803   30137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:25.144133   30137 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:25.144321   30137 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 18:58:25.144516   30137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:25.144540   30137 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 18:58:25.147243   30137 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:25.147589   30137 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:25.147614   30137 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:25.147773   30137 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHPort
	I0906 18:58:25.147965   30137 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHKeyPath
	I0906 18:58:25.148113   30137 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHUsername
	I0906 18:58:25.148247   30137 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m04/id_rsa Username:docker}
	I0906 18:58:25.232066   30137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:25.247784   30137 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
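
The stderr trace above shows the per-node probe sequence behind "minikube status" on this profile: launch the kvm2 plugin server, ask libvirt for the host state, then over SSH check disk usage (df -h /var), the kubelet unit, and the apiserver (find its PID with pgrep, try the freezer cgroup, and finally GET https://192.168.39.254:8443/healthz). Below is a minimal Go sketch of that SSH-side sequence; the "run" helper and printed strings are illustrative stand-ins rather than minikube's actual API, and the real client authenticates the healthz request with the cluster's certificates instead of skipping TLS verification.

// Minimal sketch (not minikube's actual API) of the SSH-side probe sequence
// shown in the trace above. "run" stands in for minikube's ssh_runner and
// simply executes the command locally so the example stays self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func probeNode(healthzURL string) {
	// Disk usage check, as in: sh -c "df -h /var | awk 'NR==2{print $5}'"
	if used, err := run(`df -h /var | awk 'NR==2{print $5}'`); err == nil {
		fmt.Println("/var used:", used)
	}

	// Kubelet check, as in: sudo systemctl is-active --quiet service kubelet
	if _, err := run("sudo systemctl is-active --quiet service kubelet"); err != nil {
		fmt.Println("kubelet: Stopped")
		return
	}
	fmt.Println("kubelet: Running")

	// Locate the apiserver process the same way the log does.
	pid, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*")
	if err != nil {
		fmt.Println("apiserver: Stopped")
		return
	}

	// The freezer-cgroup lookup fails just like the W-level "unable to find
	// freezer cgroup" warnings above (typical of cgroup v2 guests), so the
	// probe falls through to the HTTP healthz check.
	if _, err := run(fmt.Sprintf("sudo egrep ^[0-9]+:freezer: /proc/%s/cgroup", pid)); err != nil {
		fmt.Println("no freezer cgroup; falling back to healthz")
	}

	// GET https://<control-plane VIP>:8443/healthz and expect HTTP 200 "ok".
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(healthzURL)
	if err != nil {
		fmt.Println("apiserver: Stopped")
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusOK {
		fmt.Println("apiserver: Running")
	} else {
		fmt.Println("apiserver: Error")
	}
}

func main() {
	probeNode("https://192.168.39.254:8443/healthz")
}
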
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 7 (623.945398ms)

                                                
                                                
-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-313128-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:58:37.373817   30264 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:58:37.373911   30264 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:37.373919   30264 out.go:358] Setting ErrFile to fd 2...
	I0906 18:58:37.373923   30264 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:37.374108   30264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:58:37.374267   30264 out.go:352] Setting JSON to false
	I0906 18:58:37.374295   30264 mustload.go:65] Loading cluster: ha-313128
	I0906 18:58:37.374461   30264 notify.go:220] Checking for updates...
	I0906 18:58:37.374717   30264 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:58:37.374733   30264 status.go:255] checking status of ha-313128 ...
	I0906 18:58:37.375128   30264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:37.375169   30264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:37.394927   30264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40389
	I0906 18:58:37.395368   30264 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:37.395889   30264 main.go:141] libmachine: Using API Version  1
	I0906 18:58:37.395912   30264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:37.396214   30264 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:37.396401   30264 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:58:37.397984   30264 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 18:58:37.397997   30264 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:58:37.398307   30264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:37.398352   30264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:37.413694   30264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39171
	I0906 18:58:37.414121   30264 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:37.414647   30264 main.go:141] libmachine: Using API Version  1
	I0906 18:58:37.414667   30264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:37.414979   30264 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:37.415130   30264 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:58:37.417685   30264 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:37.418063   30264 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:58:37.418083   30264 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:37.418246   30264 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:58:37.418648   30264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:37.418690   30264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:37.433492   30264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42663
	I0906 18:58:37.433989   30264 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:37.434565   30264 main.go:141] libmachine: Using API Version  1
	I0906 18:58:37.434587   30264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:37.434936   30264 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:37.435112   30264 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:58:37.435315   30264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:37.435353   30264 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:58:37.438892   30264 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:37.439134   30264 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:58:37.439194   30264 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:58:37.439322   30264 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:58:37.439526   30264 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:58:37.439699   30264 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:58:37.439858   30264 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:58:37.525583   30264 ssh_runner.go:195] Run: systemctl --version
	I0906 18:58:37.532172   30264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:37.548333   30264 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:58:37.548366   30264 api_server.go:166] Checking apiserver status ...
	I0906 18:58:37.548398   30264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:58:37.565083   30264 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup
	W0906 18:58:37.576670   30264 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1145/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:58:37.576740   30264 ssh_runner.go:195] Run: ls
	I0906 18:58:37.581705   30264 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:58:37.585723   30264 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:58:37.585746   30264 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 18:58:37.585758   30264 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:37.585779   30264 status.go:255] checking status of ha-313128-m02 ...
	I0906 18:58:37.586149   30264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:37.586189   30264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:37.602363   30264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I0906 18:58:37.602828   30264 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:37.603293   30264 main.go:141] libmachine: Using API Version  1
	I0906 18:58:37.603319   30264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:37.603590   30264 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:37.603808   30264 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:58:37.605423   30264 status.go:330] ha-313128-m02 host status = "Stopped" (err=<nil>)
	I0906 18:58:37.605450   30264 status.go:343] host is not running, skipping remaining checks
	I0906 18:58:37.605465   30264 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:37.605488   30264 status.go:255] checking status of ha-313128-m03 ...
	I0906 18:58:37.605801   30264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:37.605847   30264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:37.620329   30264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I0906 18:58:37.620787   30264 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:37.621229   30264 main.go:141] libmachine: Using API Version  1
	I0906 18:58:37.621250   30264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:37.621537   30264 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:37.621716   30264 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:58:37.623153   30264 status.go:330] ha-313128-m03 host status = "Running" (err=<nil>)
	I0906 18:58:37.623171   30264 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:58:37.623505   30264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:37.623541   30264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:37.638849   30264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44773
	I0906 18:58:37.639232   30264 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:37.639697   30264 main.go:141] libmachine: Using API Version  1
	I0906 18:58:37.639720   30264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:37.640071   30264 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:37.640264   30264 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:58:37.643117   30264 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:37.643564   30264 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:37.643591   30264 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:37.643900   30264 host.go:66] Checking if "ha-313128-m03" exists ...
	I0906 18:58:37.644211   30264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:37.644252   30264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:37.659516   30264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40329
	I0906 18:58:37.659947   30264 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:37.660401   30264 main.go:141] libmachine: Using API Version  1
	I0906 18:58:37.660423   30264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:37.660720   30264 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:37.660909   30264 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:58:37.661089   30264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:37.661111   30264 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:58:37.663667   30264 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:37.664030   30264 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:37.664055   30264 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:37.664137   30264 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:58:37.664304   30264 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:58:37.664576   30264 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:58:37.664740   30264 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:58:37.744523   30264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:37.761323   30264 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 18:58:37.761348   30264 api_server.go:166] Checking apiserver status ...
	I0906 18:58:37.761378   30264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:58:37.776241   30264 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup
	W0906 18:58:37.787735   30264 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1526/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 18:58:37.787803   30264 ssh_runner.go:195] Run: ls
	I0906 18:58:37.793078   30264 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 18:58:37.799463   30264 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 18:58:37.799487   30264 status.go:422] ha-313128-m03 apiserver status = Running (err=<nil>)
	I0906 18:58:37.799495   30264 status.go:257] ha-313128-m03 status: &{Name:ha-313128-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:58:37.799513   30264 status.go:255] checking status of ha-313128-m04 ...
	I0906 18:58:37.799823   30264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:37.799867   30264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:37.815266   30264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34255
	I0906 18:58:37.815734   30264 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:37.816258   30264 main.go:141] libmachine: Using API Version  1
	I0906 18:58:37.816283   30264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:37.816610   30264 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:37.816826   30264 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:58:37.818314   30264 status.go:330] ha-313128-m04 host status = "Running" (err=<nil>)
	I0906 18:58:37.818329   30264 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:58:37.818616   30264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:37.818653   30264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:37.834003   30264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I0906 18:58:37.834414   30264 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:37.834854   30264 main.go:141] libmachine: Using API Version  1
	I0906 18:58:37.834874   30264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:37.835178   30264 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:37.835368   30264 main.go:141] libmachine: (ha-313128-m04) Calling .GetIP
	I0906 18:58:37.837888   30264 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:37.838283   30264 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:37.838312   30264 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:37.838435   30264 host.go:66] Checking if "ha-313128-m04" exists ...
	I0906 18:58:37.838836   30264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:37.838874   30264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:37.853091   30264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0906 18:58:37.853516   30264 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:37.854012   30264 main.go:141] libmachine: Using API Version  1
	I0906 18:58:37.854039   30264 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:37.854384   30264 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:37.854551   30264 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 18:58:37.854733   30264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:58:37.854753   30264 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 18:58:37.857715   30264 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:37.858113   30264 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:37.858139   30264 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:37.858297   30264 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHPort
	I0906 18:58:37.858469   30264 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHKeyPath
	I0906 18:58:37.858619   30264 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHUsername
	I0906 18:58:37.858760   30264 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m04/id_rsa Username:docker}
	I0906 18:58:37.940225   30264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:58:37.957056   30264 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
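
Each node's check in the trace ends with a status struct such as &{Name:ha-313128-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true ...}, and those structs are what the command renders into the -- stdout -- block above. The small sketch below shows that mapping; the field set is copied from the logged values, while the type name and rendering are simplified and not minikube's actual code.

// Sketch of how the logged status structs map onto the stdout block above.
package main

import "fmt"

type NodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func render(s NodeStatus) {
	role := "Control Plane"
	if s.Worker {
		role = "Worker"
	}
	fmt.Println(s.Name)
	fmt.Println("type:", role)
	fmt.Println("host:", s.Host)
	fmt.Println("kubelet:", s.Kubelet)
	if !s.Worker {
		// Worker nodes report APIServer/Kubeconfig as "Irrelevant", which is
		// why the stdout block omits those two lines for ha-313128-m04.
		fmt.Println("apiserver:", s.APIServer)
		fmt.Println("kubeconfig:", s.Kubeconfig)
	}
	fmt.Println()
}

func main() {
	for _, s := range []NodeStatus{
		{"ha-313128", "Running", "Running", "Running", "Configured", false},
		{"ha-313128-m02", "Stopped", "Stopped", "Stopped", "Stopped", false},
		{"ha-313128-m03", "Running", "Running", "Running", "Configured", false},
		{"ha-313128-m04", "Running", "Running", "Irrelevant", "Irrelevant", true},
	} {
		render(s)
	}
}
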
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr" : exit status 7
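
The assertion trips on the non-zero exit code rather than on the printed text. "minikube status" documents a bit-encoded exit status (1 when the host/VM is not OK, 2 when the cluster is not OK, 4 when Kubernetes is not OK), so the fully stopped ha-313128-m02 reported above plausibly accounts for 1+2+4 = 7. A rough illustration of that composition follows; the constant names are chosen here for readability and are not taken from minikube's source.

// Rough illustration (assuming the documented bit-encoded status exit codes).
package main

import "fmt"

const (
	hostNotOK       = 1 << 0 // VM/host not running
	clusterNotOK    = 1 << 1 // control-plane components not running
	kubernetesNotOK = 1 << 2 // kubelet/apiserver not healthy
)

func main() {
	code := 0
	// ha-313128-m02 is reported Stopped on every axis above.
	code |= hostNotOK
	code |= clusterNotOK
	code |= kubernetesNotOK
	fmt.Println("exit status", code) // exit status 7
}
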
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-313128 -n ha-313128
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-313128 logs -n 25: (1.417025751s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128:/home/docker/cp-test_ha-313128-m03_ha-313128.txt                       |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128 sudo cat                                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128.txt                                 |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m02:/home/docker/cp-test_ha-313128-m03_ha-313128-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m02 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04:/home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m04 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp testdata/cp-test.txt                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2237225197/001/cp-test_ha-313128-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128:/home/docker/cp-test_ha-313128-m04_ha-313128.txt                       |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128 sudo cat                                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128.txt                                 |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m02:/home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m02 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03:/home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m03 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-313128 node stop m02 -v=7                                                     | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-313128 node start m02 -v=7                                                    | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:50:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:50:42.241342   24633 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:50:42.241614   24633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:50:42.241623   24633 out.go:358] Setting ErrFile to fd 2...
	I0906 18:50:42.241627   24633 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:50:42.241844   24633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:50:42.242402   24633 out.go:352] Setting JSON to false
	I0906 18:50:42.243240   24633 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1991,"bootTime":1725646651,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:50:42.243295   24633 start.go:139] virtualization: kvm guest
	I0906 18:50:42.245178   24633 out.go:177] * [ha-313128] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 18:50:42.246461   24633 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:50:42.246466   24633 notify.go:220] Checking for updates...
	I0906 18:50:42.247673   24633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:50:42.249313   24633 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:50:42.250474   24633 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:50:42.251672   24633 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:50:42.252739   24633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:50:42.253949   24633 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:50:42.288794   24633 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 18:50:42.289936   24633 start.go:297] selected driver: kvm2
	I0906 18:50:42.289949   24633 start.go:901] validating driver "kvm2" against <nil>
	I0906 18:50:42.289962   24633 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:50:42.290679   24633 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:50:42.290744   24633 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 18:50:42.305815   24633 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 18:50:42.305868   24633 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:50:42.306084   24633 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:50:42.306137   24633 cni.go:84] Creating CNI manager for ""
	I0906 18:50:42.306149   24633 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0906 18:50:42.306154   24633 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 18:50:42.306207   24633 start.go:340] cluster config:
	{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:50:42.306307   24633 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:50:42.307955   24633 out.go:177] * Starting "ha-313128" primary control-plane node in "ha-313128" cluster
	I0906 18:50:42.309081   24633 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:50:42.309113   24633 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 18:50:42.309125   24633 cache.go:56] Caching tarball of preloaded images
	I0906 18:50:42.309203   24633 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 18:50:42.309216   24633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 18:50:42.309557   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:50:42.309582   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json: {Name:mk2b5aaa86bcacd8dc1788c104cd70b3467204ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:50:42.309744   24633 start.go:360] acquireMachinesLock for ha-313128: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:50:42.309777   24633 start.go:364] duration metric: took 18.419µs to acquireMachinesLock for "ha-313128"
	I0906 18:50:42.309804   24633 start.go:93] Provisioning new machine with config: &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:50:42.309860   24633 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 18:50:42.311483   24633 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 18:50:42.311612   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:50:42.311656   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:50:42.325721   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35079
	I0906 18:50:42.326175   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:50:42.326691   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:50:42.326710   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:50:42.327026   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:50:42.327156   24633 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 18:50:42.327294   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:50:42.327414   24633 start.go:159] libmachine.API.Create for "ha-313128" (driver="kvm2")
	I0906 18:50:42.327441   24633 client.go:168] LocalClient.Create starting
	I0906 18:50:42.327469   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 18:50:42.327502   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:50:42.327523   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:50:42.327577   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 18:50:42.327595   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:50:42.327608   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:50:42.327627   24633 main.go:141] libmachine: Running pre-create checks...
	I0906 18:50:42.327635   24633 main.go:141] libmachine: (ha-313128) Calling .PreCreateCheck
	I0906 18:50:42.327947   24633 main.go:141] libmachine: (ha-313128) Calling .GetConfigRaw
	I0906 18:50:42.328330   24633 main.go:141] libmachine: Creating machine...
	I0906 18:50:42.328348   24633 main.go:141] libmachine: (ha-313128) Calling .Create
	I0906 18:50:42.328448   24633 main.go:141] libmachine: (ha-313128) Creating KVM machine...
	I0906 18:50:42.329656   24633 main.go:141] libmachine: (ha-313128) DBG | found existing default KVM network
	I0906 18:50:42.330288   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:42.330161   24656 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0906 18:50:42.330322   24633 main.go:141] libmachine: (ha-313128) DBG | created network xml: 
	I0906 18:50:42.330340   24633 main.go:141] libmachine: (ha-313128) DBG | <network>
	I0906 18:50:42.330350   24633 main.go:141] libmachine: (ha-313128) DBG |   <name>mk-ha-313128</name>
	I0906 18:50:42.330355   24633 main.go:141] libmachine: (ha-313128) DBG |   <dns enable='no'/>
	I0906 18:50:42.330360   24633 main.go:141] libmachine: (ha-313128) DBG |   
	I0906 18:50:42.330367   24633 main.go:141] libmachine: (ha-313128) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0906 18:50:42.330373   24633 main.go:141] libmachine: (ha-313128) DBG |     <dhcp>
	I0906 18:50:42.330381   24633 main.go:141] libmachine: (ha-313128) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0906 18:50:42.330387   24633 main.go:141] libmachine: (ha-313128) DBG |     </dhcp>
	I0906 18:50:42.330394   24633 main.go:141] libmachine: (ha-313128) DBG |   </ip>
	I0906 18:50:42.330399   24633 main.go:141] libmachine: (ha-313128) DBG |   
	I0906 18:50:42.330406   24633 main.go:141] libmachine: (ha-313128) DBG | </network>
	I0906 18:50:42.330412   24633 main.go:141] libmachine: (ha-313128) DBG | 
	I0906 18:50:42.335419   24633 main.go:141] libmachine: (ha-313128) DBG | trying to create private KVM network mk-ha-313128 192.168.39.0/24...
	I0906 18:50:42.399184   24633 main.go:141] libmachine: (ha-313128) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128 ...
	I0906 18:50:42.399215   24633 main.go:141] libmachine: (ha-313128) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:50:42.399226   24633 main.go:141] libmachine: (ha-313128) DBG | private KVM network mk-ha-313128 192.168.39.0/24 created
	I0906 18:50:42.399261   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:42.399132   24656 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:50:42.399285   24633 main.go:141] libmachine: (ha-313128) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 18:50:42.637821   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:42.637701   24656 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa...
	I0906 18:50:42.786449   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:42.786308   24656 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/ha-313128.rawdisk...
	I0906 18:50:42.786491   24633 main.go:141] libmachine: (ha-313128) DBG | Writing magic tar header
	I0906 18:50:42.786508   24633 main.go:141] libmachine: (ha-313128) DBG | Writing SSH key tar header
	I0906 18:50:42.786520   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:42.786456   24656 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128 ...
	I0906 18:50:42.786635   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128
	I0906 18:50:42.786668   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 18:50:42.786681   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128 (perms=drwx------)
	I0906 18:50:42.786712   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 18:50:42.786722   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 18:50:42.786736   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 18:50:42.786751   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 18:50:42.786761   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:50:42.786775   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 18:50:42.786787   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 18:50:42.786799   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home/jenkins
	I0906 18:50:42.786808   24633 main.go:141] libmachine: (ha-313128) DBG | Checking permissions on dir: /home
	I0906 18:50:42.786816   24633 main.go:141] libmachine: (ha-313128) DBG | Skipping /home - not owner
	I0906 18:50:42.786825   24633 main.go:141] libmachine: (ha-313128) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 18:50:42.786831   24633 main.go:141] libmachine: (ha-313128) Creating domain...
	I0906 18:50:42.787754   24633 main.go:141] libmachine: (ha-313128) define libvirt domain using xml: 
	I0906 18:50:42.787766   24633 main.go:141] libmachine: (ha-313128) <domain type='kvm'>
	I0906 18:50:42.787772   24633 main.go:141] libmachine: (ha-313128)   <name>ha-313128</name>
	I0906 18:50:42.787782   24633 main.go:141] libmachine: (ha-313128)   <memory unit='MiB'>2200</memory>
	I0906 18:50:42.787791   24633 main.go:141] libmachine: (ha-313128)   <vcpu>2</vcpu>
	I0906 18:50:42.787814   24633 main.go:141] libmachine: (ha-313128)   <features>
	I0906 18:50:42.787827   24633 main.go:141] libmachine: (ha-313128)     <acpi/>
	I0906 18:50:42.787831   24633 main.go:141] libmachine: (ha-313128)     <apic/>
	I0906 18:50:42.787836   24633 main.go:141] libmachine: (ha-313128)     <pae/>
	I0906 18:50:42.787844   24633 main.go:141] libmachine: (ha-313128)     
	I0906 18:50:42.787850   24633 main.go:141] libmachine: (ha-313128)   </features>
	I0906 18:50:42.787857   24633 main.go:141] libmachine: (ha-313128)   <cpu mode='host-passthrough'>
	I0906 18:50:42.787865   24633 main.go:141] libmachine: (ha-313128)   
	I0906 18:50:42.787874   24633 main.go:141] libmachine: (ha-313128)   </cpu>
	I0906 18:50:42.787888   24633 main.go:141] libmachine: (ha-313128)   <os>
	I0906 18:50:42.787898   24633 main.go:141] libmachine: (ha-313128)     <type>hvm</type>
	I0906 18:50:42.787909   24633 main.go:141] libmachine: (ha-313128)     <boot dev='cdrom'/>
	I0906 18:50:42.787921   24633 main.go:141] libmachine: (ha-313128)     <boot dev='hd'/>
	I0906 18:50:42.787929   24633 main.go:141] libmachine: (ha-313128)     <bootmenu enable='no'/>
	I0906 18:50:42.787935   24633 main.go:141] libmachine: (ha-313128)   </os>
	I0906 18:50:42.787940   24633 main.go:141] libmachine: (ha-313128)   <devices>
	I0906 18:50:42.787947   24633 main.go:141] libmachine: (ha-313128)     <disk type='file' device='cdrom'>
	I0906 18:50:42.787955   24633 main.go:141] libmachine: (ha-313128)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/boot2docker.iso'/>
	I0906 18:50:42.787962   24633 main.go:141] libmachine: (ha-313128)       <target dev='hdc' bus='scsi'/>
	I0906 18:50:42.787967   24633 main.go:141] libmachine: (ha-313128)       <readonly/>
	I0906 18:50:42.787977   24633 main.go:141] libmachine: (ha-313128)     </disk>
	I0906 18:50:42.788009   24633 main.go:141] libmachine: (ha-313128)     <disk type='file' device='disk'>
	I0906 18:50:42.788034   24633 main.go:141] libmachine: (ha-313128)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 18:50:42.788050   24633 main.go:141] libmachine: (ha-313128)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/ha-313128.rawdisk'/>
	I0906 18:50:42.788060   24633 main.go:141] libmachine: (ha-313128)       <target dev='hda' bus='virtio'/>
	I0906 18:50:42.788073   24633 main.go:141] libmachine: (ha-313128)     </disk>
	I0906 18:50:42.788084   24633 main.go:141] libmachine: (ha-313128)     <interface type='network'>
	I0906 18:50:42.788096   24633 main.go:141] libmachine: (ha-313128)       <source network='mk-ha-313128'/>
	I0906 18:50:42.788111   24633 main.go:141] libmachine: (ha-313128)       <model type='virtio'/>
	I0906 18:50:42.788122   24633 main.go:141] libmachine: (ha-313128)     </interface>
	I0906 18:50:42.788132   24633 main.go:141] libmachine: (ha-313128)     <interface type='network'>
	I0906 18:50:42.788142   24633 main.go:141] libmachine: (ha-313128)       <source network='default'/>
	I0906 18:50:42.788151   24633 main.go:141] libmachine: (ha-313128)       <model type='virtio'/>
	I0906 18:50:42.788163   24633 main.go:141] libmachine: (ha-313128)     </interface>
	I0906 18:50:42.788186   24633 main.go:141] libmachine: (ha-313128)     <serial type='pty'>
	I0906 18:50:42.788201   24633 main.go:141] libmachine: (ha-313128)       <target port='0'/>
	I0906 18:50:42.788212   24633 main.go:141] libmachine: (ha-313128)     </serial>
	I0906 18:50:42.788219   24633 main.go:141] libmachine: (ha-313128)     <console type='pty'>
	I0906 18:50:42.788230   24633 main.go:141] libmachine: (ha-313128)       <target type='serial' port='0'/>
	I0906 18:50:42.788242   24633 main.go:141] libmachine: (ha-313128)     </console>
	I0906 18:50:42.788255   24633 main.go:141] libmachine: (ha-313128)     <rng model='virtio'>
	I0906 18:50:42.788267   24633 main.go:141] libmachine: (ha-313128)       <backend model='random'>/dev/random</backend>
	I0906 18:50:42.788281   24633 main.go:141] libmachine: (ha-313128)     </rng>
	I0906 18:50:42.788294   24633 main.go:141] libmachine: (ha-313128)     
	I0906 18:50:42.788304   24633 main.go:141] libmachine: (ha-313128)     
	I0906 18:50:42.788315   24633 main.go:141] libmachine: (ha-313128)   </devices>
	I0906 18:50:42.788322   24633 main.go:141] libmachine: (ha-313128) </domain>
	I0906 18:50:42.788340   24633 main.go:141] libmachine: (ha-313128) 
	I0906 18:50:42.792640   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:1a:9d:87 in network default
	I0906 18:50:42.793247   24633 main.go:141] libmachine: (ha-313128) Ensuring networks are active...
	I0906 18:50:42.793269   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:42.793922   24633 main.go:141] libmachine: (ha-313128) Ensuring network default is active
	I0906 18:50:42.794264   24633 main.go:141] libmachine: (ha-313128) Ensuring network mk-ha-313128 is active
	I0906 18:50:42.794846   24633 main.go:141] libmachine: (ha-313128) Getting domain xml...
	I0906 18:50:42.795607   24633 main.go:141] libmachine: (ha-313128) Creating domain...
	I0906 18:50:43.986213   24633 main.go:141] libmachine: (ha-313128) Waiting to get IP...
	I0906 18:50:43.986898   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:43.987226   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:43.987269   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:43.987214   24656 retry.go:31] will retry after 219.310914ms: waiting for machine to come up
	I0906 18:50:44.208650   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:44.209073   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:44.209112   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:44.209040   24656 retry.go:31] will retry after 263.652423ms: waiting for machine to come up
	I0906 18:50:44.474435   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:44.474934   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:44.474956   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:44.474885   24656 retry.go:31] will retry after 370.076871ms: waiting for machine to come up
	I0906 18:50:44.846380   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:44.846744   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:44.846768   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:44.846717   24656 retry.go:31] will retry after 435.12925ms: waiting for machine to come up
	I0906 18:50:45.283287   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:45.283672   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:45.283696   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:45.283635   24656 retry.go:31] will retry after 719.1692ms: waiting for machine to come up
	I0906 18:50:46.003981   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:46.004393   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:46.004421   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:46.004344   24656 retry.go:31] will retry after 582.927494ms: waiting for machine to come up
	I0906 18:50:46.589175   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:46.589589   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:46.589617   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:46.589541   24656 retry.go:31] will retry after 1.047400336s: waiting for machine to come up
	I0906 18:50:47.638869   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:47.639295   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:47.639322   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:47.639244   24656 retry.go:31] will retry after 959.975477ms: waiting for machine to come up
	I0906 18:50:48.600448   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:48.600911   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:48.600933   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:48.600845   24656 retry.go:31] will retry after 1.819892733s: waiting for machine to come up
	I0906 18:50:50.422074   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:50.422512   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:50.422535   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:50.422470   24656 retry.go:31] will retry after 2.317608626s: waiting for machine to come up
	I0906 18:50:52.741860   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:52.742278   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:52.742300   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:52.742246   24656 retry.go:31] will retry after 1.884163944s: waiting for machine to come up
	I0906 18:50:54.629204   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:54.629610   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:54.629631   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:54.629577   24656 retry.go:31] will retry after 3.296166546s: waiting for machine to come up
	I0906 18:50:57.927315   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:50:57.927722   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:50:57.927749   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:50:57.927670   24656 retry.go:31] will retry after 3.645758109s: waiting for machine to come up
	I0906 18:51:01.577712   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:01.578200   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find current IP address of domain ha-313128 in network mk-ha-313128
	I0906 18:51:01.578229   24633 main.go:141] libmachine: (ha-313128) DBG | I0906 18:51:01.578140   24656 retry.go:31] will retry after 4.942659137s: waiting for machine to come up
	I0906 18:51:06.525967   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.526312   24633 main.go:141] libmachine: (ha-313128) Found IP for machine: 192.168.39.70
	I0906 18:51:06.526338   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has current primary IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.526348   24633 main.go:141] libmachine: (ha-313128) Reserving static IP address...
	I0906 18:51:06.526675   24633 main.go:141] libmachine: (ha-313128) DBG | unable to find host DHCP lease matching {name: "ha-313128", mac: "52:54:00:e1:5d:d2", ip: "192.168.39.70"} in network mk-ha-313128
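	The "waiting for machine to come up" sequence above is a retry loop with growing delays: the driver keeps looking for a DHCP lease matching the domain's MAC address until one appears. A minimal sketch of the same idea, assuming virsh is available on the host (the waitForIP helper, the parsing heuristic, and the fixed backoff are illustrative and are not minikube's actual retry.go logic):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	        "time"
	    )

	    // waitForIP polls `virsh net-dhcp-leases <network>` until a lease for
	    // the given MAC appears or the deadline passes. The delay grows between
	    // attempts, roughly mirroring the backoff intervals seen in the log.
	    func waitForIP(network, mac string, timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        delay := 200 * time.Millisecond
	        for time.Now().Before(deadline) {
	            out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
	            if err == nil {
	                for _, line := range strings.Split(string(out), "\n") {
	                    if !strings.Contains(line, mac) {
	                        continue
	                    }
	                    // Lease rows contain an "a.b.c.d/prefix" column; return the address part.
	                    for _, field := range strings.Fields(line) {
	                        if strings.Contains(field, "/") && strings.Count(field, ".") == 3 {
	                            return strings.SplitN(field, "/", 2)[0], nil
	                        }
	                    }
	                }
	            }
	            time.Sleep(delay)
	            delay *= 2 // back off before the next attempt
	        }
	        return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
	    }

	    func main() {
	        ip, err := waitForIP("mk-ha-313128", "52:54:00:e1:5d:d2", 2*time.Minute)
	        fmt.Println(ip, err)
	    }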
	I0906 18:51:06.597574   24633 main.go:141] libmachine: (ha-313128) DBG | Getting to WaitForSSH function...
	I0906 18:51:06.597619   24633 main.go:141] libmachine: (ha-313128) Reserved static IP address: 192.168.39.70
	I0906 18:51:06.597635   24633 main.go:141] libmachine: (ha-313128) Waiting for SSH to be available...
	I0906 18:51:06.600248   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.600651   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:06.600679   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.600936   24633 main.go:141] libmachine: (ha-313128) DBG | Using SSH client type: external
	I0906 18:51:06.600961   24633 main.go:141] libmachine: (ha-313128) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa (-rw-------)
	I0906 18:51:06.600988   24633 main.go:141] libmachine: (ha-313128) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.70 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:51:06.601002   24633 main.go:141] libmachine: (ha-313128) DBG | About to run SSH command:
	I0906 18:51:06.601015   24633 main.go:141] libmachine: (ha-313128) DBG | exit 0
	I0906 18:51:06.725154   24633 main.go:141] libmachine: (ha-313128) DBG | SSH cmd err, output: <nil>: 
	I0906 18:51:06.725459   24633 main.go:141] libmachine: (ha-313128) KVM machine creation complete!
	I0906 18:51:06.725772   24633 main.go:141] libmachine: (ha-313128) Calling .GetConfigRaw
	I0906 18:51:06.726286   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:06.726476   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:06.726637   24633 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 18:51:06.726652   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:51:06.727819   24633 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 18:51:06.727834   24633 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 18:51:06.727842   24633 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 18:51:06.727848   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:06.730591   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.730983   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:06.731016   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.731117   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:06.731299   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.731441   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.731585   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:06.731762   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:06.731973   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:06.731985   24633 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 18:51:06.836292   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
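	The "About to run SSH command: exit 0" exchange is the driver's reachability probe: once a trivial command succeeds over SSH, the machine is treated as up. A minimal sketch of that probe using golang.org/x/crypto/ssh (host, port, user and key path are taken from the log; this is an illustration, not the libmachine implementation itself):

	    package main

	    import (
	        "log"
	        "os"
	        "time"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        key, err := os.ReadFile("/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa")
	        if err != nil {
	            log.Fatal(err)
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            log.Fatal(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	            Timeout:         10 * time.Second,
	        }
	        client, err := ssh.Dial("tcp", "192.168.39.70:22", cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer client.Close()
	        sess, err := client.NewSession()
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer sess.Close()
	        if err := sess.Run("exit 0"); err != nil { // a non-nil error means the probe failed
	            log.Fatal(err)
	        }
	        log.Println("SSH is available")
	    }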
	I0906 18:51:06.836313   24633 main.go:141] libmachine: Detecting the provisioner...
	I0906 18:51:06.836320   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:06.838996   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.839387   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:06.839420   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.839518   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:06.839727   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.839896   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.840053   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:06.840220   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:06.840381   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:06.840393   24633 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 18:51:06.949833   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 18:51:06.949949   24633 main.go:141] libmachine: found compatible host: buildroot
	I0906 18:51:06.949961   24633 main.go:141] libmachine: Provisioning with buildroot...
	I0906 18:51:06.949969   24633 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 18:51:06.950241   24633 buildroot.go:166] provisioning hostname "ha-313128"
	I0906 18:51:06.950264   24633 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 18:51:06.950485   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:06.952910   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.953295   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:06.953317   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:06.953488   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:06.953693   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.953840   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:06.954001   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:06.954152   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:06.954332   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:06.954344   24633 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-313128 && echo "ha-313128" | sudo tee /etc/hostname
	I0906 18:51:07.075964   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128
	
	I0906 18:51:07.076000   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.078750   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.079086   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.079113   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.079316   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:07.079484   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.079673   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.079798   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:07.079962   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:07.080126   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:07.080141   24633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-313128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-313128/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-313128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:51:07.193921   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:51:07.193958   24633 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 18:51:07.193989   24633 buildroot.go:174] setting up certificates
	I0906 18:51:07.193999   24633 provision.go:84] configureAuth start
	I0906 18:51:07.194011   24633 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 18:51:07.194348   24633 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:51:07.196926   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.197260   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.197286   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.197450   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.199422   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.199698   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.199717   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.199838   24633 provision.go:143] copyHostCerts
	I0906 18:51:07.199869   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:51:07.199919   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 18:51:07.199937   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:51:07.200019   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 18:51:07.200174   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:51:07.200203   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 18:51:07.200213   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:51:07.200255   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 18:51:07.200340   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:51:07.200363   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 18:51:07.200372   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:51:07.200407   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 18:51:07.200497   24633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.ha-313128 san=[127.0.0.1 192.168.39.70 ha-313128 localhost minikube]
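	provision.go:117 generates a server certificate signed by the local minikube CA, with the SANs listed above baked in. A compact sketch of that step with crypto/x509 follows; the file names, the loadCA helper, the 2048-bit key size and the validity period are assumptions for illustration, not the values minikube actually uses:

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "log"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    // loadCA reads a PEM-encoded CA certificate and RSA private key
	    // (assumed to be PKCS#1, i.e. "RSA PRIVATE KEY" blocks).
	    func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
	        certPEM, err := os.ReadFile(certPath)
	        if err != nil {
	            log.Fatal(err)
	        }
	        keyPEM, err := os.ReadFile(keyPath)
	        if err != nil {
	            log.Fatal(err)
	        }
	        cb, _ := pem.Decode(certPEM)
	        kb, _ := pem.Decode(keyPEM)
	        cert, err := x509.ParseCertificate(cb.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        key, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        return cert, key
	    }

	    func main() {
	        caCert, caKey := loadCA("ca.pem", "ca-key.pem")

	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            log.Fatal(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(time.Now().UnixNano()),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-313128"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // SANs from the log line above.
	            DNSNames:    []string{"ha-313128", "localhost", "minikube"},
	            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.70")},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	        if err != nil {
	            log.Fatal(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }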
	I0906 18:51:07.392285   24633 provision.go:177] copyRemoteCerts
	I0906 18:51:07.392342   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:51:07.392362   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.394986   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.395297   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.395325   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.395525   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:07.395685   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.395819   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:07.395921   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:07.479623   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 18:51:07.479691   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 18:51:07.505265   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 18:51:07.505334   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:51:07.529872   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 18:51:07.529933   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0906 18:51:07.553374   24633 provision.go:87] duration metric: took 359.361307ms to configureAuth
	I0906 18:51:07.553397   24633 buildroot.go:189] setting minikube options for container-runtime
	I0906 18:51:07.553562   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:51:07.553623   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.556156   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.556501   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.556527   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.556676   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:07.556912   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.557048   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.557155   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:07.557294   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:07.557492   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:07.557512   24633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 18:51:07.787198   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 18:51:07.787231   24633 main.go:141] libmachine: Checking connection to Docker...
	I0906 18:51:07.787242   24633 main.go:141] libmachine: (ha-313128) Calling .GetURL
	I0906 18:51:07.788669   24633 main.go:141] libmachine: (ha-313128) DBG | Using libvirt version 6000000
	I0906 18:51:07.790719   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.791027   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.791057   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.791182   24633 main.go:141] libmachine: Docker is up and running!
	I0906 18:51:07.791202   24633 main.go:141] libmachine: Reticulating splines...
	I0906 18:51:07.791210   24633 client.go:171] duration metric: took 25.463760113s to LocalClient.Create
	I0906 18:51:07.791234   24633 start.go:167] duration metric: took 25.463820367s to libmachine.API.Create "ha-313128"
	I0906 18:51:07.791246   24633 start.go:293] postStartSetup for "ha-313128" (driver="kvm2")
	I0906 18:51:07.791261   24633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:51:07.791279   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:07.791515   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:51:07.791537   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.793579   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.793894   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.793923   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.794060   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:07.794226   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.794368   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:07.794495   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:07.880189   24633 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:51:07.885048   24633 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 18:51:07.885072   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 18:51:07.885149   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 18:51:07.885250   24633 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 18:51:07.885262   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 18:51:07.885376   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 18:51:07.895441   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:51:07.923397   24633 start.go:296] duration metric: took 132.136955ms for postStartSetup
	I0906 18:51:07.923473   24633 main.go:141] libmachine: (ha-313128) Calling .GetConfigRaw
	I0906 18:51:07.924092   24633 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:51:07.926375   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.926621   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.926640   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.926875   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:51:07.927092   24633 start.go:128] duration metric: took 25.617222048s to createHost
	I0906 18:51:07.927113   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:07.929244   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.929555   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:07.929570   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:07.929747   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:07.929945   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.930104   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:07.930251   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:07.930418   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:07.930613   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 18:51:07.930632   24633 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 18:51:08.038105   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725648668.016751149
	
	I0906 18:51:08.038127   24633 fix.go:216] guest clock: 1725648668.016751149
	I0906 18:51:08.038134   24633 fix.go:229] Guest: 2024-09-06 18:51:08.016751149 +0000 UTC Remote: 2024-09-06 18:51:07.927102611 +0000 UTC m=+25.719332215 (delta=89.648538ms)
	I0906 18:51:08.038163   24633 fix.go:200] guest clock delta is within tolerance: 89.648538ms
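	fix.go reads the guest clock over SSH with `date +%s.%N`, compares it against the local clock, and only intervenes if the skew exceeds a tolerance (here the delta of ~90ms is accepted). A small sketch of the delta calculation, assuming the same seconds.nanoseconds output format; the 2-second tolerance is an illustrative value, not minikube's:

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	        "time"
	    )

	    // guestTime parses the "seconds.nanoseconds" string printed by `date +%s.%N`.
	    // %N is zero-padded to 9 digits, so a direct integer parse yields nanoseconds.
	    func guestTime(out string) (time.Time, error) {
	        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	        sec, err := strconv.ParseInt(parts[0], 10, 64)
	        if err != nil {
	            return time.Time{}, err
	        }
	        var nsec int64
	        if len(parts) == 2 {
	            nsec, err = strconv.ParseInt(parts[1], 10, 64)
	            if err != nil {
	                return time.Time{}, err
	            }
	        }
	        return time.Unix(sec, nsec), nil
	    }

	    func main() {
	        guest, err := guestTime("1725648668.016751149") // value captured in the log
	        if err != nil {
	            panic(err)
	        }
	        delta := time.Since(guest)
	        if delta < 0 {
	            delta = -delta
	        }
	        const tolerance = 2 * time.Second // illustrative threshold
	        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	    }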
	I0906 18:51:08.038171   24633 start.go:83] releasing machines lock for "ha-313128", held for 25.728376749s
	I0906 18:51:08.038193   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:08.038444   24633 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:51:08.041444   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.041798   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:08.041826   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.042067   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:08.042545   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:08.042725   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:08.042811   24633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:51:08.042861   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:08.042969   24633 ssh_runner.go:195] Run: cat /version.json
	I0906 18:51:08.043011   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:08.045414   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.045687   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.045772   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:08.045801   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.045943   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:08.046117   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:08.046150   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:08.046174   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:08.046255   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:08.046331   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:08.046388   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:08.046449   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:08.046575   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:08.046710   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:08.130810   24633 ssh_runner.go:195] Run: systemctl --version
	I0906 18:51:08.154651   24633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 18:51:08.313672   24633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 18:51:08.319900   24633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:51:08.320001   24633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:51:08.337715   24633 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 18:51:08.337741   24633 start.go:495] detecting cgroup driver to use...
	I0906 18:51:08.337820   24633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 18:51:08.356242   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:51:08.371685   24633 docker.go:217] disabling cri-docker service (if available) ...
	I0906 18:51:08.371740   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 18:51:08.387728   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 18:51:08.402690   24633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 18:51:08.531270   24633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 18:51:08.703601   24633 docker.go:233] disabling docker service ...
	I0906 18:51:08.703668   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 18:51:08.718740   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 18:51:08.731543   24633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 18:51:08.865160   24633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 18:51:08.995934   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 18:51:09.010476   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:51:09.030226   24633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 18:51:09.030288   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.040653   24633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 18:51:09.040759   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.051481   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.061652   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.072907   24633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:51:09.083460   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.093354   24633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.110243   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:09.120642   24633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:51:09.129843   24633 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 18:51:09.129895   24633 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 18:51:09.142908   24633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:51:09.152738   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:51:09.277726   24633 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 18:51:09.381806   24633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 18:51:09.381889   24633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 18:51:09.387328   24633 start.go:563] Will wait 60s for crictl version
	I0906 18:51:09.387386   24633 ssh_runner.go:195] Run: which crictl
	I0906 18:51:09.391304   24633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:51:09.431494   24633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 18:51:09.431568   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:51:09.459195   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:51:09.490550   24633 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 18:51:09.491778   24633 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 18:51:09.494246   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:09.494523   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:09.494552   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:09.494788   24633 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 18:51:09.498999   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:51:09.512390   24633 kubeadm.go:883] updating cluster {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 18:51:09.512493   24633 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:51:09.512534   24633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:51:09.544646   24633 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 18:51:09.544722   24633 ssh_runner.go:195] Run: which lz4
	I0906 18:51:09.548564   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0906 18:51:09.548652   24633 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 18:51:09.552604   24633 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 18:51:09.552630   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 18:51:10.933093   24633 crio.go:462] duration metric: took 1.384461239s to copy over tarball
	I0906 18:51:10.933167   24633 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 18:51:12.961238   24633 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.028040223s)
	I0906 18:51:12.961266   24633 crio.go:469] duration metric: took 2.028146469s to extract the tarball
	I0906 18:51:12.961275   24633 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 18:51:12.998311   24633 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:51:13.045521   24633 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 18:51:13.045548   24633 cache_images.go:84] Images are preloaded, skipping loading
	I0906 18:51:13.045558   24633 kubeadm.go:934] updating node { 192.168.39.70 8443 v1.31.0 crio true true} ...
	I0906 18:51:13.045681   24633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-313128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 18:51:13.045804   24633 ssh_runner.go:195] Run: crio config
	I0906 18:51:13.094877   24633 cni.go:84] Creating CNI manager for ""
	I0906 18:51:13.094895   24633 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0906 18:51:13.094910   24633 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 18:51:13.094932   24633 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-313128 NodeName:ha-313128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 18:51:13.095060   24633 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-313128"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 18:51:13.095095   24633 kube-vip.go:115] generating kube-vip config ...
	I0906 18:51:13.095137   24633 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 18:51:13.117215   24633 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 18:51:13.117347   24633 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0906 18:51:13.117417   24633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:51:13.133450   24633 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 18:51:13.133529   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 18:51:13.143093   24633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 18:51:13.159866   24633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:51:13.175754   24633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0906 18:51:13.192134   24633 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0906 18:51:13.208621   24633 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0906 18:51:13.212459   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:51:13.224981   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:51:13.349241   24633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:51:13.367120   24633 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128 for IP: 192.168.39.70
	I0906 18:51:13.367144   24633 certs.go:194] generating shared ca certs ...
	I0906 18:51:13.367163   24633 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:13.367343   24633 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 18:51:13.367415   24633 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 18:51:13.367435   24633 certs.go:256] generating profile certs ...
	I0906 18:51:13.367515   24633 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key
	I0906 18:51:13.367534   24633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt with IP's: []
	I0906 18:51:13.666007   24633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt ...
	I0906 18:51:13.666050   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt: {Name:mkae10c4a64978657f91d36b765edf2f72d6b208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:13.666247   24633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key ...
	I0906 18:51:13.666263   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key: {Name:mk49f39f518303d15b2fb4f8a39da575a917b087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:13.666354   24633 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.deddd12e
	I0906 18:51:13.666371   24633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.deddd12e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.70 192.168.39.254]
	I0906 18:51:13.920406   24633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.deddd12e ...
	I0906 18:51:13.920433   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.deddd12e: {Name:mk1fa2ba1c8b6fdd0c2c1b723647f82406e8dba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:13.920583   24633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.deddd12e ...
	I0906 18:51:13.920595   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.deddd12e: {Name:mk52bb4d4b7d02fab0ab5d4beac0a76ea18ed743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:13.920661   24633 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.deddd12e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt
	I0906 18:51:13.920756   24633 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.deddd12e -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key
	I0906 18:51:13.920815   24633 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key
	I0906 18:51:13.920830   24633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt with IP's: []
	I0906 18:51:14.002856   24633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt ...
	I0906 18:51:14.002883   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt: {Name:mk2700e95bb8cfbf5bacfb518b6bf12523e49fbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:14.003026   24633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key ...
	I0906 18:51:14.003037   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key: {Name:mk668e5ba0da1ad43715dba8fcdf30dc055390cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:14.003116   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 18:51:14.003132   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 18:51:14.003143   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 18:51:14.003156   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 18:51:14.003168   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 18:51:14.003180   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 18:51:14.003192   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 18:51:14.003203   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 18:51:14.003269   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 18:51:14.003305   24633 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 18:51:14.003314   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 18:51:14.003345   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:51:14.003371   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:51:14.003392   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 18:51:14.003429   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:51:14.003454   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:14.003472   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 18:51:14.003485   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 18:51:14.004036   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:51:14.029711   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 18:51:14.052823   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:51:14.075993   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:51:14.099286   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 18:51:14.125504   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 18:51:14.150019   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:51:14.175178   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 18:51:14.208805   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:51:14.232695   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 18:51:14.257263   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 18:51:14.281295   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 18:51:14.298392   24633 ssh_runner.go:195] Run: openssl version
	I0906 18:51:14.304447   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:51:14.317138   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:14.322188   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:14.322250   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:14.328420   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 18:51:14.340736   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 18:51:14.352636   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 18:51:14.357230   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 18:51:14.357297   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 18:51:14.363056   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 18:51:14.375559   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 18:51:14.387857   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 18:51:14.392947   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 18:51:14.393003   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 18:51:14.398952   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 18:51:14.412232   24633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:51:14.416575   24633 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:51:14.416647   24633 kubeadm.go:392] StartCluster: {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:51:14.416759   24633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 18:51:14.416851   24633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 18:51:14.469476   24633 cri.go:89] found id: ""
	I0906 18:51:14.469549   24633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 18:51:14.482583   24633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 18:51:14.493642   24633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 18:51:14.505454   24633 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 18:51:14.505475   24633 kubeadm.go:157] found existing configuration files:
	
	I0906 18:51:14.505526   24633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 18:51:14.515659   24633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 18:51:14.515720   24633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 18:51:14.524992   24633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 18:51:14.534185   24633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 18:51:14.534243   24633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 18:51:14.544106   24633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 18:51:14.553426   24633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 18:51:14.553490   24633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 18:51:14.563381   24633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 18:51:14.573166   24633 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 18:51:14.573231   24633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 18:51:14.582897   24633 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 18:51:14.689370   24633 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 18:51:14.689449   24633 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 18:51:14.797473   24633 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 18:51:14.797608   24633 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 18:51:14.797720   24633 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 18:51:14.807533   24633 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 18:51:14.844917   24633 out.go:235]   - Generating certificates and keys ...
	I0906 18:51:14.845072   24633 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 18:51:14.845162   24633 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 18:51:15.027267   24633 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 18:51:15.311688   24633 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 18:51:15.533807   24633 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 18:51:15.655687   24633 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 18:51:15.914716   24633 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 18:51:15.914964   24633 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-313128 localhost] and IPs [192.168.39.70 127.0.0.1 ::1]
	I0906 18:51:16.269557   24633 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 18:51:16.269748   24633 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-313128 localhost] and IPs [192.168.39.70 127.0.0.1 ::1]
	I0906 18:51:16.524685   24633 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 18:51:16.650845   24633 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 18:51:16.847630   24633 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 18:51:16.847904   24633 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 18:51:17.007883   24633 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 18:51:17.138574   24633 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 18:51:17.419167   24633 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 18:51:17.616983   24633 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 18:51:17.720800   24633 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 18:51:17.721483   24633 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 18:51:17.726904   24633 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 18:51:17.728530   24633 out.go:235]   - Booting up control plane ...
	I0906 18:51:17.728632   24633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 18:51:17.728721   24633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 18:51:17.729028   24633 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 18:51:17.746057   24633 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 18:51:17.755129   24633 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 18:51:17.755253   24633 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 18:51:17.907543   24633 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 18:51:17.907667   24633 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 18:51:18.408740   24633 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.62305ms
	I0906 18:51:18.408831   24633 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 18:51:24.434725   24633 kubeadm.go:310] [api-check] The API server is healthy after 6.026907054s
	I0906 18:51:24.446291   24633 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 18:51:24.468363   24633 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 18:51:25.007118   24633 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 18:51:25.007301   24633 kubeadm.go:310] [mark-control-plane] Marking the node ha-313128 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 18:51:25.023020   24633 kubeadm.go:310] [bootstrap-token] Using token: xmh4ax.y6lhpiqw6s4v24x2
	I0906 18:51:25.024167   24633 out.go:235]   - Configuring RBAC rules ...
	I0906 18:51:25.024318   24633 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 18:51:25.031086   24633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 18:51:25.042621   24633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 18:51:25.047120   24633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 18:51:25.051473   24633 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 18:51:25.058411   24633 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 18:51:25.073097   24633 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 18:51:25.321353   24633 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 18:51:25.842136   24633 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 18:51:25.843034   24633 kubeadm.go:310] 
	I0906 18:51:25.843096   24633 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 18:51:25.843124   24633 kubeadm.go:310] 
	I0906 18:51:25.843227   24633 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 18:51:25.843241   24633 kubeadm.go:310] 
	I0906 18:51:25.843276   24633 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 18:51:25.843338   24633 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 18:51:25.843402   24633 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 18:51:25.843415   24633 kubeadm.go:310] 
	I0906 18:51:25.843467   24633 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 18:51:25.843477   24633 kubeadm.go:310] 
	I0906 18:51:25.843536   24633 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 18:51:25.843560   24633 kubeadm.go:310] 
	I0906 18:51:25.843646   24633 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 18:51:25.843753   24633 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 18:51:25.843839   24633 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 18:51:25.843848   24633 kubeadm.go:310] 
	I0906 18:51:25.843949   24633 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 18:51:25.844050   24633 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 18:51:25.844074   24633 kubeadm.go:310] 
	I0906 18:51:25.844200   24633 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xmh4ax.y6lhpiqw6s4v24x2 \
	I0906 18:51:25.844323   24633 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 18:51:25.844354   24633 kubeadm.go:310] 	--control-plane 
	I0906 18:51:25.844363   24633 kubeadm.go:310] 
	I0906 18:51:25.844466   24633 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 18:51:25.844477   24633 kubeadm.go:310] 
	I0906 18:51:25.844580   24633 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xmh4ax.y6lhpiqw6s4v24x2 \
	I0906 18:51:25.844727   24633 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 18:51:25.845538   24633 kubeadm.go:310] W0906 18:51:14.671355     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:51:25.845883   24633 kubeadm.go:310] W0906 18:51:14.672130     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:51:25.846046   24633 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 18:51:25.846079   24633 cni.go:84] Creating CNI manager for ""
	I0906 18:51:25.846092   24633 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0906 18:51:25.848400   24633 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 18:51:25.849478   24633 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 18:51:25.856686   24633 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0906 18:51:25.856705   24633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0906 18:51:25.885689   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 18:51:26.237198   24633 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 18:51:26.237259   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:26.237284   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-313128 minikube.k8s.io/updated_at=2024_09_06T18_51_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=ha-313128 minikube.k8s.io/primary=true
	I0906 18:51:26.382196   24633 ops.go:34] apiserver oom_adj: -16
	I0906 18:51:26.382349   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:26.882958   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:27.383065   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:27.882971   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:28.382740   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:28.883392   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:29.382768   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:51:29.503653   24633 kubeadm.go:1113] duration metric: took 3.266449086s to wait for elevateKubeSystemPrivileges
	I0906 18:51:29.503690   24633 kubeadm.go:394] duration metric: took 15.087047227s to StartCluster
	I0906 18:51:29.503707   24633 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:29.503798   24633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:51:29.504429   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:29.504705   24633 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:51:29.504727   24633 start.go:241] waiting for startup goroutines ...
	I0906 18:51:29.504725   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 18:51:29.504740   24633 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 18:51:29.504806   24633 addons.go:69] Setting storage-provisioner=true in profile "ha-313128"
	I0906 18:51:29.504826   24633 addons.go:69] Setting default-storageclass=true in profile "ha-313128"
	I0906 18:51:29.504887   24633 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-313128"
	I0906 18:51:29.504835   24633 addons.go:234] Setting addon storage-provisioner=true in "ha-313128"
	I0906 18:51:29.504973   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:51:29.504982   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:51:29.505365   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:29.505367   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:29.505413   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:29.505482   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:29.521255   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
	I0906 18:51:29.521305   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0906 18:51:29.521798   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:29.521805   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:29.522305   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:29.522326   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:29.522450   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:29.522466   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:29.522684   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:29.522816   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:29.522985   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:51:29.523214   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:29.523252   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:29.525805   24633 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:51:29.526131   24633 kapi.go:59] client config for ha-313128: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt", KeyFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key", CAFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 18:51:29.526682   24633 cert_rotation.go:140] Starting client certificate rotation controller
	I0906 18:51:29.526931   24633 addons.go:234] Setting addon default-storageclass=true in "ha-313128"
	I0906 18:51:29.526969   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:51:29.527324   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:29.527352   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:29.539024   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0906 18:51:29.539465   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:29.539994   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:29.540020   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:29.540341   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:29.540550   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:51:29.542534   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:29.542690   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0906 18:51:29.543008   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:29.543404   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:29.543426   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:29.543779   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:29.544220   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:29.544255   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:29.544434   24633 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 18:51:29.545561   24633 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:51:29.545581   24633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 18:51:29.545600   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:29.548949   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:29.549378   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:29.549398   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:29.549581   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:29.549767   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:29.549924   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:29.550085   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:29.559362   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41339
	I0906 18:51:29.559803   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:29.560281   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:29.560305   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:29.560567   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:29.560736   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:51:29.562285   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:29.562497   24633 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 18:51:29.562513   24633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 18:51:29.562526   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:29.564874   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:29.565298   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:29.565326   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:29.565482   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:29.565661   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:29.565799   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:29.565951   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:29.636220   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 18:51:29.682123   24633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:51:29.697841   24633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 18:51:30.246142   24633 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0906 18:51:30.545152   24633 main.go:141] libmachine: Making call to close driver server
	I0906 18:51:30.545180   24633 main.go:141] libmachine: (ha-313128) Calling .Close
	I0906 18:51:30.545152   24633 main.go:141] libmachine: Making call to close driver server
	I0906 18:51:30.545250   24633 main.go:141] libmachine: (ha-313128) Calling .Close
	I0906 18:51:30.545507   24633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:51:30.545526   24633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:51:30.545536   24633 main.go:141] libmachine: Making call to close driver server
	I0906 18:51:30.545544   24633 main.go:141] libmachine: (ha-313128) Calling .Close
	I0906 18:51:30.545564   24633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:51:30.545583   24633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:51:30.545596   24633 main.go:141] libmachine: Making call to close driver server
	I0906 18:51:30.545606   24633 main.go:141] libmachine: (ha-313128) Calling .Close
	I0906 18:51:30.545834   24633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:51:30.545850   24633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:51:30.545848   24633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:51:30.545865   24633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:51:30.545912   24633 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 18:51:30.545932   24633 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 18:51:30.546044   24633 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0906 18:51:30.546058   24633 round_trippers.go:469] Request Headers:
	I0906 18:51:30.546068   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:51:30.546078   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:51:30.568360   24633 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0906 18:51:30.569153   24633 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0906 18:51:30.569169   24633 round_trippers.go:469] Request Headers:
	I0906 18:51:30.569177   24633 round_trippers.go:473]     Content-Type: application/json
	I0906 18:51:30.569182   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:51:30.569186   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:51:30.577205   24633 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0906 18:51:30.577354   24633 main.go:141] libmachine: Making call to close driver server
	I0906 18:51:30.577370   24633 main.go:141] libmachine: (ha-313128) Calling .Close
	I0906 18:51:30.577655   24633 main.go:141] libmachine: Successfully made call to close driver server
	I0906 18:51:30.577666   24633 main.go:141] libmachine: (ha-313128) DBG | Closing plugin on server side
	I0906 18:51:30.577675   24633 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 18:51:30.579205   24633 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0906 18:51:30.580206   24633 addons.go:510] duration metric: took 1.075470312s for enable addons: enabled=[storage-provisioner default-storageclass]
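The storage-provisioner and default-storageclass addons enabled here can also be toggled per profile from the minikube CLI; an illustrative invocation, assuming the ha-313128 profile shown in this log:

    # list addon state for the profile, then enable the two addons by hand
    minikube -p ha-313128 addons list
    minikube -p ha-313128 addons enable storage-provisioner
    minikube -p ha-313128 addons enable default-storageclass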
	I0906 18:51:30.580235   24633 start.go:246] waiting for cluster config update ...
	I0906 18:51:30.580249   24633 start.go:255] writing updated cluster config ...
	I0906 18:51:30.581657   24633 out.go:201] 
	I0906 18:51:30.582779   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:51:30.582837   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:51:30.584221   24633 out.go:177] * Starting "ha-313128-m02" control-plane node in "ha-313128" cluster
	I0906 18:51:30.585121   24633 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:51:30.585141   24633 cache.go:56] Caching tarball of preloaded images
	I0906 18:51:30.585214   24633 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 18:51:30.585225   24633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 18:51:30.585293   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:51:30.585489   24633 start.go:360] acquireMachinesLock for ha-313128-m02: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:51:30.585536   24633 start.go:364] duration metric: took 23.513µs to acquireMachinesLock for "ha-313128-m02"
	I0906 18:51:30.585560   24633 start.go:93] Provisioning new machine with config: &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:51:30.585620   24633 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0906 18:51:30.586903   24633 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 18:51:30.586986   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:30.587016   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:30.601355   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0906 18:51:30.601697   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:30.602152   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:30.602171   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:30.602432   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:30.602646   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetMachineName
	I0906 18:51:30.602806   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:30.602963   24633 start.go:159] libmachine.API.Create for "ha-313128" (driver="kvm2")
	I0906 18:51:30.602985   24633 client.go:168] LocalClient.Create starting
	I0906 18:51:30.603023   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 18:51:30.603060   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:51:30.603080   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:51:30.603143   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 18:51:30.603170   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:51:30.603183   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:51:30.603207   24633 main.go:141] libmachine: Running pre-create checks...
	I0906 18:51:30.603219   24633 main.go:141] libmachine: (ha-313128-m02) Calling .PreCreateCheck
	I0906 18:51:30.603399   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetConfigRaw
	I0906 18:51:30.603766   24633 main.go:141] libmachine: Creating machine...
	I0906 18:51:30.603784   24633 main.go:141] libmachine: (ha-313128-m02) Calling .Create
	I0906 18:51:30.603911   24633 main.go:141] libmachine: (ha-313128-m02) Creating KVM machine...
	I0906 18:51:30.605134   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found existing default KVM network
	I0906 18:51:30.605228   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found existing private KVM network mk-ha-313128
	I0906 18:51:30.605381   24633 main.go:141] libmachine: (ha-313128-m02) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02 ...
	I0906 18:51:30.605407   24633 main.go:141] libmachine: (ha-313128-m02) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:51:30.605447   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:30.605351   25019 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:51:30.605536   24633 main.go:141] libmachine: (ha-313128-m02) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 18:51:30.830840   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:30.830729   25019 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa...
	I0906 18:51:31.129668   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:31.129563   25019 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/ha-313128-m02.rawdisk...
	I0906 18:51:31.129699   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Writing magic tar header
	I0906 18:51:31.129714   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Writing SSH key tar header
	I0906 18:51:31.129722   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:31.129672   25019 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02 ...
	I0906 18:51:31.129811   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02
	I0906 18:51:31.129849   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02 (perms=drwx------)
	I0906 18:51:31.129864   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 18:51:31.129875   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 18:51:31.129891   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:51:31.129901   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 18:51:31.129911   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 18:51:31.129929   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 18:51:31.129942   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home/jenkins
	I0906 18:51:31.129956   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 18:51:31.129970   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 18:51:31.129981   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Checking permissions on dir: /home
	I0906 18:51:31.129991   24633 main.go:141] libmachine: (ha-313128-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 18:51:31.130005   24633 main.go:141] libmachine: (ha-313128-m02) Creating domain...
	I0906 18:51:31.130018   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Skipping /home - not owner
	I0906 18:51:31.131009   24633 main.go:141] libmachine: (ha-313128-m02) define libvirt domain using xml: 
	I0906 18:51:31.131029   24633 main.go:141] libmachine: (ha-313128-m02) <domain type='kvm'>
	I0906 18:51:31.131039   24633 main.go:141] libmachine: (ha-313128-m02)   <name>ha-313128-m02</name>
	I0906 18:51:31.131047   24633 main.go:141] libmachine: (ha-313128-m02)   <memory unit='MiB'>2200</memory>
	I0906 18:51:31.131056   24633 main.go:141] libmachine: (ha-313128-m02)   <vcpu>2</vcpu>
	I0906 18:51:31.131067   24633 main.go:141] libmachine: (ha-313128-m02)   <features>
	I0906 18:51:31.131077   24633 main.go:141] libmachine: (ha-313128-m02)     <acpi/>
	I0906 18:51:31.131087   24633 main.go:141] libmachine: (ha-313128-m02)     <apic/>
	I0906 18:51:31.131096   24633 main.go:141] libmachine: (ha-313128-m02)     <pae/>
	I0906 18:51:31.131107   24633 main.go:141] libmachine: (ha-313128-m02)     
	I0906 18:51:31.131117   24633 main.go:141] libmachine: (ha-313128-m02)   </features>
	I0906 18:51:31.131130   24633 main.go:141] libmachine: (ha-313128-m02)   <cpu mode='host-passthrough'>
	I0906 18:51:31.131142   24633 main.go:141] libmachine: (ha-313128-m02)   
	I0906 18:51:31.131152   24633 main.go:141] libmachine: (ha-313128-m02)   </cpu>
	I0906 18:51:31.131169   24633 main.go:141] libmachine: (ha-313128-m02)   <os>
	I0906 18:51:31.131178   24633 main.go:141] libmachine: (ha-313128-m02)     <type>hvm</type>
	I0906 18:51:31.131188   24633 main.go:141] libmachine: (ha-313128-m02)     <boot dev='cdrom'/>
	I0906 18:51:31.131199   24633 main.go:141] libmachine: (ha-313128-m02)     <boot dev='hd'/>
	I0906 18:51:31.131212   24633 main.go:141] libmachine: (ha-313128-m02)     <bootmenu enable='no'/>
	I0906 18:51:31.131222   24633 main.go:141] libmachine: (ha-313128-m02)   </os>
	I0906 18:51:31.131233   24633 main.go:141] libmachine: (ha-313128-m02)   <devices>
	I0906 18:51:31.131245   24633 main.go:141] libmachine: (ha-313128-m02)     <disk type='file' device='cdrom'>
	I0906 18:51:31.131262   24633 main.go:141] libmachine: (ha-313128-m02)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/boot2docker.iso'/>
	I0906 18:51:31.131273   24633 main.go:141] libmachine: (ha-313128-m02)       <target dev='hdc' bus='scsi'/>
	I0906 18:51:31.131284   24633 main.go:141] libmachine: (ha-313128-m02)       <readonly/>
	I0906 18:51:31.131300   24633 main.go:141] libmachine: (ha-313128-m02)     </disk>
	I0906 18:51:31.131314   24633 main.go:141] libmachine: (ha-313128-m02)     <disk type='file' device='disk'>
	I0906 18:51:31.131327   24633 main.go:141] libmachine: (ha-313128-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 18:51:31.131348   24633 main.go:141] libmachine: (ha-313128-m02)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/ha-313128-m02.rawdisk'/>
	I0906 18:51:31.131360   24633 main.go:141] libmachine: (ha-313128-m02)       <target dev='hda' bus='virtio'/>
	I0906 18:51:31.131372   24633 main.go:141] libmachine: (ha-313128-m02)     </disk>
	I0906 18:51:31.131383   24633 main.go:141] libmachine: (ha-313128-m02)     <interface type='network'>
	I0906 18:51:31.131393   24633 main.go:141] libmachine: (ha-313128-m02)       <source network='mk-ha-313128'/>
	I0906 18:51:31.131404   24633 main.go:141] libmachine: (ha-313128-m02)       <model type='virtio'/>
	I0906 18:51:31.131414   24633 main.go:141] libmachine: (ha-313128-m02)     </interface>
	I0906 18:51:31.131425   24633 main.go:141] libmachine: (ha-313128-m02)     <interface type='network'>
	I0906 18:51:31.131436   24633 main.go:141] libmachine: (ha-313128-m02)       <source network='default'/>
	I0906 18:51:31.131446   24633 main.go:141] libmachine: (ha-313128-m02)       <model type='virtio'/>
	I0906 18:51:31.131458   24633 main.go:141] libmachine: (ha-313128-m02)     </interface>
	I0906 18:51:31.131470   24633 main.go:141] libmachine: (ha-313128-m02)     <serial type='pty'>
	I0906 18:51:31.131482   24633 main.go:141] libmachine: (ha-313128-m02)       <target port='0'/>
	I0906 18:51:31.131490   24633 main.go:141] libmachine: (ha-313128-m02)     </serial>
	I0906 18:51:31.131503   24633 main.go:141] libmachine: (ha-313128-m02)     <console type='pty'>
	I0906 18:51:31.131514   24633 main.go:141] libmachine: (ha-313128-m02)       <target type='serial' port='0'/>
	I0906 18:51:31.131526   24633 main.go:141] libmachine: (ha-313128-m02)     </console>
	I0906 18:51:31.131538   24633 main.go:141] libmachine: (ha-313128-m02)     <rng model='virtio'>
	I0906 18:51:31.131550   24633 main.go:141] libmachine: (ha-313128-m02)       <backend model='random'>/dev/random</backend>
	I0906 18:51:31.131560   24633 main.go:141] libmachine: (ha-313128-m02)     </rng>
	I0906 18:51:31.131571   24633 main.go:141] libmachine: (ha-313128-m02)     
	I0906 18:51:31.131581   24633 main.go:141] libmachine: (ha-313128-m02)     
	I0906 18:51:31.131590   24633 main.go:141] libmachine: (ha-313128-m02)   </devices>
	I0906 18:51:31.131600   24633 main.go:141] libmachine: (ha-313128-m02) </domain>
	I0906 18:51:31.131613   24633 main.go:141] libmachine: (ha-313128-m02) 
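The XML logged above is the libvirt domain definition the kvm2 driver submits for the m02 node. Once the domain is defined it can be inspected on the host with virsh; a sketch using the domain name and connection URI from this log:

    sudo virsh --connect qemu:///system list --all
    sudo virsh --connect qemu:///system dumpxml ha-313128-m02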
	I0906 18:51:31.137934   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:d1:48:14 in network default
	I0906 18:51:31.138539   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:31.138557   24633 main.go:141] libmachine: (ha-313128-m02) Ensuring networks are active...
	I0906 18:51:31.139314   24633 main.go:141] libmachine: (ha-313128-m02) Ensuring network default is active
	I0906 18:51:31.139633   24633 main.go:141] libmachine: (ha-313128-m02) Ensuring network mk-ha-313128 is active
	I0906 18:51:31.140092   24633 main.go:141] libmachine: (ha-313128-m02) Getting domain xml...
	I0906 18:51:31.140875   24633 main.go:141] libmachine: (ha-313128-m02) Creating domain...
	I0906 18:51:32.393306   24633 main.go:141] libmachine: (ha-313128-m02) Waiting to get IP...
	I0906 18:51:32.394205   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:32.394523   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:32.394578   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:32.394517   25019 retry.go:31] will retry after 288.850488ms: waiting for machine to come up
	I0906 18:51:32.685225   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:32.685717   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:32.685746   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:32.685671   25019 retry.go:31] will retry after 282.043787ms: waiting for machine to come up
	I0906 18:51:32.969192   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:32.969632   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:32.969658   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:32.969600   25019 retry.go:31] will retry after 363.032435ms: waiting for machine to come up
	I0906 18:51:33.334308   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:33.334785   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:33.334822   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:33.334744   25019 retry.go:31] will retry after 422.058707ms: waiting for machine to come up
	I0906 18:51:33.757898   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:33.758279   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:33.758308   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:33.758233   25019 retry.go:31] will retry after 503.499024ms: waiting for machine to come up
	I0906 18:51:34.262906   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:34.263257   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:34.263285   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:34.263218   25019 retry.go:31] will retry after 689.475949ms: waiting for machine to come up
	I0906 18:51:34.954115   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:34.954716   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:34.954751   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:34.954662   25019 retry.go:31] will retry after 1.00434144s: waiting for machine to come up
	I0906 18:51:35.960231   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:35.960587   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:35.960610   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:35.960542   25019 retry.go:31] will retry after 1.05804784s: waiting for machine to come up
	I0906 18:51:37.020099   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:37.020571   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:37.020599   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:37.020520   25019 retry.go:31] will retry after 1.215751027s: waiting for machine to come up
	I0906 18:51:38.238034   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:38.238501   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:38.238524   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:38.238453   25019 retry.go:31] will retry after 1.44067495s: waiting for machine to come up
	I0906 18:51:39.681354   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:39.681813   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:39.681848   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:39.681767   25019 retry.go:31] will retry after 2.063449934s: waiting for machine to come up
	I0906 18:51:41.746930   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:41.747407   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:41.747437   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:41.747360   25019 retry.go:31] will retry after 2.803466893s: waiting for machine to come up
	I0906 18:51:44.554086   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:44.554574   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:44.554608   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:44.554520   25019 retry.go:31] will retry after 2.881675176s: waiting for machine to come up
	I0906 18:51:47.439208   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:47.439722   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find current IP address of domain ha-313128-m02 in network mk-ha-313128
	I0906 18:51:47.439751   24633 main.go:141] libmachine: (ha-313128-m02) DBG | I0906 18:51:47.439671   25019 retry.go:31] will retry after 5.083573314s: waiting for machine to come up
	I0906 18:51:52.525650   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.526025   24633 main.go:141] libmachine: (ha-313128-m02) Found IP for machine: 192.168.39.32
	I0906 18:51:52.526054   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has current primary IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.526065   24633 main.go:141] libmachine: (ha-313128-m02) Reserving static IP address...
	I0906 18:51:52.526419   24633 main.go:141] libmachine: (ha-313128-m02) DBG | unable to find host DHCP lease matching {name: "ha-313128-m02", mac: "52:54:00:0d:cf:ee", ip: "192.168.39.32"} in network mk-ha-313128
	I0906 18:51:52.598045   24633 main.go:141] libmachine: (ha-313128-m02) Reserved static IP address: 192.168.39.32
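The retry loop above is simply waiting for the guest to obtain a DHCP lease on the private libvirt network. A rough manual equivalent, assuming the mk-ha-313128 network and MAC address shown in the log:

    # poll the network's DHCP leases until the m02 MAC shows up
    until sudo virsh --connect qemu:///system net-dhcp-leases mk-ha-313128 | grep -q '52:54:00:0d:cf:ee'; do
      sleep 2
    done
    sudo virsh --connect qemu:///system net-dhcp-leases mk-ha-313128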
	I0906 18:51:52.598073   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Getting to WaitForSSH function...
	I0906 18:51:52.598081   24633 main.go:141] libmachine: (ha-313128-m02) Waiting for SSH to be available...
	I0906 18:51:52.601206   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.601738   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:52.601772   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.601998   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Using SSH client type: external
	I0906 18:51:52.602018   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa (-rw-------)
	I0906 18:51:52.602046   24633 main.go:141] libmachine: (ha-313128-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:51:52.602063   24633 main.go:141] libmachine: (ha-313128-m02) DBG | About to run SSH command:
	I0906 18:51:52.602077   24633 main.go:141] libmachine: (ha-313128-m02) DBG | exit 0
	I0906 18:51:52.725210   24633 main.go:141] libmachine: (ha-313128-m02) DBG | SSH cmd err, output: <nil>: 
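The external SSH probe logged just above amounts to the following command, reassembled here for readability from the arguments in the log:

    ssh -F /dev/null \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o PasswordAuthentication=no -o IdentitiesOnly=yes \
      -o ConnectTimeout=10 -o ConnectionAttempts=3 \
      -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa \
      -p 22 docker@192.168.39.32 'exit 0'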
	I0906 18:51:52.725524   24633 main.go:141] libmachine: (ha-313128-m02) KVM machine creation complete!
	I0906 18:51:52.725862   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetConfigRaw
	I0906 18:51:52.726391   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:52.726578   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:52.726731   24633 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 18:51:52.726744   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 18:51:52.728072   24633 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 18:51:52.728091   24633 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 18:51:52.728097   24633 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 18:51:52.728102   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:52.730282   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.730625   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:52.730651   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.730811   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:52.730997   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.731151   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.731277   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:52.731420   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:52.731665   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:52.731682   24633 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 18:51:52.832298   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:51:52.832322   24633 main.go:141] libmachine: Detecting the provisioner...
	I0906 18:51:52.832332   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:52.834998   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.835332   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:52.835360   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.835465   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:52.835700   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.835842   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.835968   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:52.836089   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:52.836237   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:52.836247   24633 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 18:51:52.937651   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 18:51:52.937719   24633 main.go:141] libmachine: found compatible host: buildroot
	I0906 18:51:52.937727   24633 main.go:141] libmachine: Provisioning with buildroot...
	I0906 18:51:52.937740   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetMachineName
	I0906 18:51:52.937971   24633 buildroot.go:166] provisioning hostname "ha-313128-m02"
	I0906 18:51:52.937987   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetMachineName
	I0906 18:51:52.938117   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:52.941041   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.941365   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:52.941394   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:52.941540   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:52.941708   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.941883   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:52.942006   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:52.942155   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:52.942360   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:52.942378   24633 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-313128-m02 && echo "ha-313128-m02" | sudo tee /etc/hostname
	I0906 18:51:53.057183   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128-m02
	
	I0906 18:51:53.057211   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:53.059810   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.060143   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.060164   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.060345   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:53.060534   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.060718   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.060892   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:53.061063   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:53.061257   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:53.061274   24633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-313128-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-313128-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-313128-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:51:53.170161   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
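After the hostname snippet above has run, the guest should report the new hostname and carry a matching /etc/hosts entry; a quick check from inside the VM (illustrative, not part of the test output):

    hostname                          # expected: ha-313128-m02
    grep 'ha-313128-m02' /etc/hosts   # expected: 127.0.1.1 ha-313128-m02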
	I0906 18:51:53.170199   24633 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 18:51:53.170219   24633 buildroot.go:174] setting up certificates
	I0906 18:51:53.170258   24633 provision.go:84] configureAuth start
	I0906 18:51:53.170278   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetMachineName
	I0906 18:51:53.170577   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:51:53.173163   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.173558   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.173587   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.173768   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:53.175952   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.176269   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.176296   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.176392   24633 provision.go:143] copyHostCerts
	I0906 18:51:53.176419   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:51:53.176452   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 18:51:53.176463   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:51:53.176527   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 18:51:53.176624   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:51:53.176649   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 18:51:53.176655   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:51:53.176691   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 18:51:53.176755   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:51:53.176779   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 18:51:53.176786   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:51:53.176826   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 18:51:53.176916   24633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.ha-313128-m02 san=[127.0.0.1 192.168.39.32 ha-313128-m02 localhost minikube]
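minikube generates this server certificate in-process, but an equivalent certificate with the same SANs can be produced by hand with openssl; an illustrative sketch only, run against the profile's certs directory (the subject fields below are assumptions, not minikube's exact values):

    # hypothetical manual equivalent of the cert generation logged above
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.ha-313128-m02/CN=minikube" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=DNS:ha-313128-m02,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.39.32') \
      -out server.pem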
	I0906 18:51:53.531978   24633 provision.go:177] copyRemoteCerts
	I0906 18:51:53.532031   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:51:53.532055   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:53.534641   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.534972   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.534999   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.535174   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:53.535400   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.535565   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:53.535703   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 18:51:53.615451   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 18:51:53.615533   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:51:53.641667   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 18:51:53.641759   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 18:51:53.669096   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 18:51:53.669179   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 18:51:53.695612   24633 provision.go:87] duration metric: took 525.337896ms to configureAuth
	I0906 18:51:53.695645   24633 buildroot.go:189] setting minikube options for container-runtime
	I0906 18:51:53.695825   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:51:53.695887   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:53.698363   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.698782   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.698810   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.698997   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:53.699207   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.699366   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.699522   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:53.699716   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:53.699901   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:53.699924   24633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 18:51:53.915727   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 18:51:53.915756   24633 main.go:141] libmachine: Checking connection to Docker...
	I0906 18:51:53.915775   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetURL
	I0906 18:51:53.917175   24633 main.go:141] libmachine: (ha-313128-m02) DBG | Using libvirt version 6000000
	I0906 18:51:53.919363   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.919721   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.919762   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.919875   24633 main.go:141] libmachine: Docker is up and running!
	I0906 18:51:53.919894   24633 main.go:141] libmachine: Reticulating splines...
	I0906 18:51:53.919901   24633 client.go:171] duration metric: took 23.31690762s to LocalClient.Create
	I0906 18:51:53.919925   24633 start.go:167] duration metric: took 23.316961673s to libmachine.API.Create "ha-313128"
	I0906 18:51:53.919943   24633 start.go:293] postStartSetup for "ha-313128-m02" (driver="kvm2")
	I0906 18:51:53.919959   24633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:51:53.919977   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:53.920221   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:51:53.920243   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:53.922141   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.922443   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:53.922468   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:53.922586   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:53.922753   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:53.922903   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:53.923033   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 18:51:54.007879   24633 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:51:54.012541   24633 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 18:51:54.012572   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 18:51:54.012633   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 18:51:54.012700   24633 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 18:51:54.012709   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 18:51:54.012788   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 18:51:54.022295   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:51:54.048093   24633 start.go:296] duration metric: took 128.135633ms for postStartSetup
	I0906 18:51:54.048145   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetConfigRaw
	I0906 18:51:54.048680   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:51:54.051341   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.051693   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:54.051719   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.051982   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:51:54.052393   24633 start.go:128] duration metric: took 23.466754043s to createHost
	I0906 18:51:54.052441   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:54.054574   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.054926   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:54.054949   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.055147   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:54.055327   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:54.055604   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:54.055746   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:54.055907   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:51:54.056109   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0906 18:51:54.056121   24633 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 18:51:54.158010   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725648714.116348320
	
	I0906 18:51:54.158037   24633 fix.go:216] guest clock: 1725648714.116348320
	I0906 18:51:54.158048   24633 fix.go:229] Guest: 2024-09-06 18:51:54.11634832 +0000 UTC Remote: 2024-09-06 18:51:54.052421453 +0000 UTC m=+71.844651063 (delta=63.926867ms)
	I0906 18:51:54.158071   24633 fix.go:200] guest clock delta is within tolerance: 63.926867ms
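The guest-clock check above compares the VM's `date +%s.%N` output against the host's wall clock at the moment the command returns. A minimal manual version of the same measurement, reusing the SSH key and address from this log:

    host_epoch=$(date +%s.%N)
    guest_epoch=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa \
      docker@192.168.39.32 'date +%s.%N')
    awk -v h="$host_epoch" -v g="$guest_epoch" 'BEGIN{printf "delta: %.3fs\n", h-g}'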
	I0906 18:51:54.158081   24633 start.go:83] releasing machines lock for "ha-313128-m02", held for 23.572533563s
	I0906 18:51:54.158106   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:54.158351   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:51:54.160983   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.161491   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:54.161519   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.163916   24633 out.go:177] * Found network options:
	I0906 18:51:54.165233   24633 out.go:177]   - NO_PROXY=192.168.39.70
	W0906 18:51:54.166526   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 18:51:54.166557   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:54.167095   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:54.167291   24633 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 18:51:54.167372   24633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:51:54.167411   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	W0906 18:51:54.167495   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 18:51:54.167570   24633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 18:51:54.167592   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 18:51:54.170184   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.170377   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.170565   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:54.170590   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.170805   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:54.170809   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:54.170831   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:54.170975   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:54.170979   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 18:51:54.171132   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 18:51:54.171134   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:54.171326   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 18:51:54.171327   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 18:51:54.171456   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 18:51:54.400160   24633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 18:51:54.407055   24633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:51:54.407111   24633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:51:54.425130   24633 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 18:51:54.425152   24633 start.go:495] detecting cgroup driver to use...
	I0906 18:51:54.425239   24633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 18:51:54.442658   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:51:54.457602   24633 docker.go:217] disabling cri-docker service (if available) ...
	I0906 18:51:54.457666   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 18:51:54.472644   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 18:51:54.487290   24633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 18:51:54.602638   24633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 18:51:54.769543   24633 docker.go:233] disabling docker service ...
	I0906 18:51:54.769604   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 18:51:54.784508   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 18:51:54.799154   24633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 18:51:54.927422   24633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 18:51:55.048008   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 18:51:55.062937   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:51:55.083211   24633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 18:51:55.083270   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.094129   24633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 18:51:55.094193   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.104791   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.116503   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.126980   24633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:51:55.138550   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.149446   24633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:51:55.167080   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
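The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is switched to cgroupfs, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A minimal Go sketch of the two headline rewrites, assuming the file is edited locally rather than through minikube's ssh_runner (illustrative only, not minikube's actual code):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	// pin the pause image, mirroring: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	// switch the cgroup manager, mirroring: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		panic(err)
    	}
    }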
	I0906 18:51:55.178377   24633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:51:55.187946   24633 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 18:51:55.188002   24633 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 18:51:55.203527   24633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:51:55.222751   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:51:55.340905   24633 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 18:51:55.431581   24633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 18:51:55.431646   24633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 18:51:55.436404   24633 start.go:563] Will wait 60s for crictl version
	I0906 18:51:55.436485   24633 ssh_runner.go:195] Run: which crictl
	I0906 18:51:55.440395   24633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:51:55.481607   24633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 18:51:55.481694   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:51:55.512073   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:51:55.540712   24633 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 18:51:55.541928   24633 out.go:177]   - env NO_PROXY=192.168.39.70
	I0906 18:51:55.542984   24633 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 18:51:55.546063   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:55.546500   24633 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:51:45 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 18:51:55.546525   24633 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 18:51:55.546782   24633 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 18:51:55.551222   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:51:55.563782   24633 mustload.go:65] Loading cluster: ha-313128
	I0906 18:51:55.564006   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:51:55.564375   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:55.564406   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:55.579244   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0906 18:51:55.579765   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:55.580261   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:55.580287   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:55.580605   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:55.580771   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:51:55.582340   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:51:55.582738   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:55.582769   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:55.598072   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I0906 18:51:55.598492   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:55.598909   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:55.598929   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:55.599284   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:55.599472   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:55.599640   24633 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128 for IP: 192.168.39.32
	I0906 18:51:55.599649   24633 certs.go:194] generating shared ca certs ...
	I0906 18:51:55.599664   24633 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:55.599777   24633 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 18:51:55.599812   24633 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 18:51:55.599821   24633 certs.go:256] generating profile certs ...
	I0906 18:51:55.599884   24633 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key
	I0906 18:51:55.599908   24633 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.45734e05
	I0906 18:51:55.599923   24633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.45734e05 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.70 192.168.39.32 192.168.39.254]
	I0906 18:51:55.664204   24633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.45734e05 ...
	I0906 18:51:55.664233   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.45734e05: {Name:mkb4a2e0ab1ba114f51a63da71c5c0ab5250a4f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:55.664415   24633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.45734e05 ...
	I0906 18:51:55.664439   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.45734e05: {Name:mkf05835fddfb31126cf809ae0a4fed25c679c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:51:55.664566   24633 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.45734e05 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt
	I0906 18:51:55.664699   24633 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.45734e05 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key
	I0906 18:51:55.664816   24633 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key
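The apiserver serving certificate generated above carries the cluster service IP (10.96.0.1), localhost, both node IPs, and the HA VIP 192.168.39.254 as IP SANs. A minimal sketch of issuing such a certificate with Go's crypto/x509; the file names, key type (RSA/PKCS#1), subject, and validity period are assumptions for illustration, not minikube's implementation:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Load the CA pair (paths shortened; assumes an RSA/PKCS#1-encoded key).
    	caCertPEM, _ := os.ReadFile("ca.crt")
    	caKeyPEM, _ := os.ReadFile("ca.key")
    	cb, _ := pem.Decode(caCertPEM)
    	caCert, err := x509.ParseCertificate(cb.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	kb, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
    	if err != nil {
    		panic(err)
    	}

    	// SANs copied from the log line above.
    	var ips []net.IP
    	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
    		"192.168.39.70", "192.168.39.32", "192.168.39.254"} {
    		ips = append(ips, net.ParseIP(s))
    	}

    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }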
	I0906 18:51:55.664844   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 18:51:55.664883   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 18:51:55.664914   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 18:51:55.664933   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 18:51:55.664951   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 18:51:55.664969   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 18:51:55.664986   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 18:51:55.665000   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 18:51:55.665050   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 18:51:55.665085   24633 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 18:51:55.665094   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 18:51:55.665116   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:51:55.665148   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:51:55.665189   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 18:51:55.665244   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:51:55.665288   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 18:51:55.665309   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:55.665327   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 18:51:55.665369   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:55.668143   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:55.668470   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:55.668491   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:55.668681   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:55.668886   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:55.669057   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:55.669166   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:55.745232   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 18:51:55.751412   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 18:51:55.765984   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 18:51:55.770489   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 18:51:55.782003   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 18:51:55.786857   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 18:51:55.798862   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 18:51:55.803225   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0906 18:51:55.813358   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 18:51:55.817418   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 18:51:55.827594   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 18:51:55.831544   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0906 18:51:55.843360   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:51:55.869870   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 18:51:55.894969   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:51:55.919286   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:51:55.944458   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0906 18:51:55.968696   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 18:51:55.992704   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:51:56.015928   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 18:51:56.038934   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 18:51:56.062758   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:51:56.086178   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 18:51:56.109157   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 18:51:56.126213   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 18:51:56.144980   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 18:51:56.163980   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0906 18:51:56.181686   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 18:51:56.200170   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0906 18:51:56.217739   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 18:51:56.236591   24633 ssh_runner.go:195] Run: openssl version
	I0906 18:51:56.242674   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:51:56.254908   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:56.259760   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:56.259809   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:51:56.266372   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 18:51:56.277202   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 18:51:56.288013   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 18:51:56.292440   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 18:51:56.292490   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 18:51:56.298189   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 18:51:56.308729   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 18:51:56.319322   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 18:51:56.323443   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 18:51:56.323486   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 18:51:56.328874   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
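Each CA bundle above is hashed with openssl and then symlinked into /etc/ssl/certs under its subject hash (e.g. b5213941.0 for minikubeCA.pem), so OpenSSL-based clients on the node can locate it. A minimal Go sketch of that hash-and-symlink step, shelling out to openssl exactly as the log does; it must run as root, and the path in main is illustrative:

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkCert(pemPath string) error {
    	// openssl x509 -hash -noout -in <pem>  -> subject hash, e.g. "b5213941"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // mirror ln -fs: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }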
	I0906 18:51:56.339327   24633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:51:56.343147   24633 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:51:56.343199   24633 kubeadm.go:934] updating node {m02 192.168.39.32 8443 v1.31.0 crio true true} ...
	I0906 18:51:56.343297   24633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-313128-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 18:51:56.343324   24633 kube-vip.go:115] generating kube-vip config ...
	I0906 18:51:56.343360   24633 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 18:51:56.360229   24633 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 18:51:56.360317   24633 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 18:51:56.360373   24633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:51:56.370531   24633 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0906 18:51:56.370590   24633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0906 18:51:56.379939   24633 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0906 18:51:56.379974   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0906 18:51:56.380040   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0906 18:51:56.380051   24633 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0906 18:51:56.380081   24633 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0906 18:51:56.384231   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0906 18:51:56.384260   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0906 18:51:56.986596   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0906 18:51:56.986724   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0906 18:51:56.992796   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0906 18:51:56.992827   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0906 18:51:57.271779   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:51:57.287745   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0906 18:51:57.287836   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0906 18:51:57.293546   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0906 18:51:57.293586   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
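Each of kubectl, kubeadm, and kubelet is checked for on the new node with stat and transferred only when the stat fails. A rough equivalent of that check-then-copy pattern using the system ssh/scp clients; in the log the copy actually goes through minikube's ssh_runner and the target directory is root-owned, so treat this as an illustrative sketch, not the real transfer path:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureRemote copies localPath to user@host:remotePath only if a remote stat fails.
    func ensureRemote(user, host, key, localPath, remotePath string) error {
    	target := fmt.Sprintf("%s@%s", user, host)
    	// equivalent of: stat -c "%s %y" <remotePath>
    	if exec.Command("ssh", "-i", key, target, "stat", "-c", "%s %y", remotePath).Run() == nil {
    		return nil // already present, skip the transfer
    	}
    	// equivalent of the scp in the log (a real run would also need sudo to land in /var/lib/minikube)
    	return exec.Command("scp", "-i", key, localPath, target+":"+remotePath).Run()
    }

    func main() {
    	err := ensureRemote("docker", "192.168.39.32",
    		"/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa",
    		"/home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet",
    		"/var/lib/minikube/binaries/v1.31.0/kubelet")
    	if err != nil {
    		panic(err)
    	}
    }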
	I0906 18:51:57.620524   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 18:51:57.629974   24633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 18:51:57.646374   24633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:51:57.662738   24633 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0906 18:51:57.679087   24633 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0906 18:51:57.682857   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
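The one-liner above removes any existing control-plane.minikube.internal entry from /etc/hosts and appends the VIP mapping, so the file always holds exactly one such line. A small Go sketch of the same rewrite, run as root; the hostname and IP are taken from the log, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	// keep every line except stale mappings for the hostname, like the grep -v in the log
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("192.168.39.254\t%s", host))
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }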
	I0906 18:51:57.695646   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:51:57.820090   24633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:51:57.837441   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:51:57.837817   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:51:57.837860   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:51:57.852429   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0906 18:51:57.852901   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:51:57.853376   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:51:57.853397   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:51:57.853713   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:51:57.853917   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:51:57.854070   24633 start.go:317] joinCluster: &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:51:57.854195   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0906 18:51:57.854218   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:51:57.857048   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:57.857524   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:51:57.857553   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:51:57.857782   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:51:57.857955   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:51:57.858104   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:51:57.858241   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:51:58.001758   24633 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:51:58.001809   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token emqixv.kkhhq8mwvy4cltk9 --discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-313128-m02 --control-plane --apiserver-advertise-address=192.168.39.32 --apiserver-bind-port=8443"
	I0906 18:52:20.534856   24633 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token emqixv.kkhhq8mwvy4cltk9 --discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-313128-m02 --control-plane --apiserver-advertise-address=192.168.39.32 --apiserver-bind-port=8443": (22.533021448s)
	I0906 18:52:20.534908   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0906 18:52:21.036721   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-313128-m02 minikube.k8s.io/updated_at=2024_09_06T18_52_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=ha-313128 minikube.k8s.io/primary=false
	I0906 18:52:21.144223   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-313128-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0906 18:52:21.236895   24633 start.go:319] duration metric: took 23.382822757s to joinCluster
	I0906 18:52:21.237034   24633 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:52:21.237311   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:52:21.238436   24633 out.go:177] * Verifying Kubernetes components...
	I0906 18:52:21.239623   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:52:21.453669   24633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:52:21.475521   24633 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:52:21.475854   24633 kapi.go:59] client config for ha-313128: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt", KeyFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key", CAFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 18:52:21.475946   24633 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.70:8443
	I0906 18:52:21.476228   24633 node_ready.go:35] waiting up to 6m0s for node "ha-313128-m02" to be "Ready" ...
	I0906 18:52:21.476348   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:21.476360   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:21.476371   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:21.476381   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:21.499552   24633 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0906 18:52:21.976507   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:21.976533   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:21.976545   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:21.976552   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:21.985880   24633 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0906 18:52:22.476771   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:22.476796   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:22.476808   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:22.476815   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:22.514723   24633 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I0906 18:52:22.976806   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:22.976831   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:22.976843   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:22.976848   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:22.985889   24633 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0906 18:52:23.476790   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:23.476815   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:23.476826   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:23.476834   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:23.494440   24633 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0906 18:52:23.495067   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:23.977449   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:23.977471   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:23.977500   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:23.977507   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:24.083583   24633 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I0906 18:52:24.476646   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:24.476677   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:24.476688   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:24.476695   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:24.480633   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:24.976619   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:24.976639   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:24.976647   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:24.976652   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:24.979550   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:25.476556   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:25.476578   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:25.476586   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:25.476591   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:25.482148   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:52:25.977279   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:25.977300   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:25.977306   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:25.977310   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:25.981396   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:25.982519   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:26.476895   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:26.476918   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:26.476925   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:26.476929   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:26.480635   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:26.976709   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:26.976732   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:26.976740   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:26.976748   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:26.979883   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:27.477476   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:27.477499   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:27.477511   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:27.477516   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:27.483649   24633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 18:52:27.976824   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:27.976866   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:27.976878   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:27.976884   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:27.979837   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:28.476692   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:28.476712   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:28.476720   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:28.476724   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:28.479731   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:28.480725   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:28.977152   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:28.977174   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:28.977184   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:28.977188   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:28.980274   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:29.477277   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:29.477300   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:29.477310   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:29.477316   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:29.484774   24633 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 18:52:29.977232   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:29.977253   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:29.977261   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:29.977265   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:29.980398   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:30.476483   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:30.476507   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:30.476516   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:30.476520   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:30.479630   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:30.976384   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:30.976408   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:30.976417   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:30.976422   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:30.979366   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:30.980142   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:31.476436   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:31.476458   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:31.476466   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:31.476470   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:31.482330   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:52:31.976641   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:31.976671   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:31.976680   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:31.976687   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:31.979507   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:32.477379   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:32.477400   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:32.477408   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:32.477411   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:32.480314   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:32.976836   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:32.976871   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:32.976883   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:32.976890   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:32.988922   24633 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0906 18:52:32.989409   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:33.476761   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:33.476786   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:33.476797   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:33.476802   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:33.482012   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:52:33.976791   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:33.976810   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:33.976819   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:33.976822   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:33.979927   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:34.477153   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:34.477175   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:34.477182   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:34.477187   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:34.480048   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:34.977233   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:34.977254   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:34.977261   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:34.977265   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:34.980346   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:35.477347   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:35.477380   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.477387   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.477391   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:35.483375   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:52:35.484012   24633 node_ready.go:53] node "ha-313128-m02" has status "Ready":"False"
	I0906 18:52:35.976573   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:35.976595   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.976606   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.976611   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:35.979492   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:35.980085   24633 node_ready.go:49] node "ha-313128-m02" has status "Ready":"True"
	I0906 18:52:35.980104   24633 node_ready.go:38] duration metric: took 14.503855476s for node "ha-313128-m02" to be "Ready" ...
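The repeated GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02 requests above are minikube polling roughly twice per second until the node reports the Ready condition, which here takes about 14.5s. A minimal client-go sketch of the same wait, assuming the kubeconfig path from the log; the poll interval and timeout are illustrative, not minikube's exact values:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19576-6021/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll the node object until its Ready condition is True, as node_ready.go does.
    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-313128-m02", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient API errors: keep polling
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(`node "ha-313128-m02" is Ready`)
    }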
	I0906 18:52:35.980115   24633 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:52:35.980210   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:52:35.980221   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.980230   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.980235   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:35.984206   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:35.991932   24633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:35.992021   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-gccvh
	I0906 18:52:35.992033   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.992041   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.992047   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:35.995101   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:35.995664   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:35.995680   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.995695   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:35.995699   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.998302   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:35.998982   24633 pod_ready.go:93] pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:35.999000   24633 pod_ready.go:82] duration metric: took 7.045331ms for pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:35.999008   24633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:35.999056   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-gk28z
	I0906 18:52:35.999063   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:35.999070   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:35.999073   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.001831   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.002473   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:36.002488   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.002495   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.002500   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.005397   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.006213   24633 pod_ready.go:93] pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:36.006228   24633 pod_ready.go:82] duration metric: took 7.214096ms for pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:36.006238   24633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:36.006284   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128
	I0906 18:52:36.006296   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.006303   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.006307   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.008599   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.009377   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:36.009391   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.009398   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.009402   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.012269   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.012885   24633 pod_ready.go:93] pod "etcd-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:36.012904   24633 pod_ready.go:82] duration metric: took 6.659121ms for pod "etcd-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:36.012928   24633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:36.012985   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:36.012993   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.012999   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.013003   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.015599   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.016661   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:36.016675   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.016681   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.016686   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.019340   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:36.513636   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:36.513665   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.513675   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.513681   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.517307   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:36.518008   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:36.518023   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:36.518029   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:36.518034   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:36.520463   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:37.013212   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:37.013239   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:37.013248   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:37.013251   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:37.016567   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:37.017173   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:37.017190   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:37.017201   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:37.017205   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:37.019356   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:37.513989   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:37.514013   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:37.514021   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:37.514024   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:37.517392   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:37.518329   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:37.518347   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:37.518357   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:37.518365   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:37.520918   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:38.013675   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:38.013699   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:38.013707   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:38.013711   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:38.024772   24633 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0906 18:52:38.025369   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:38.025387   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:38.025397   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:38.025402   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:38.030416   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:38.030940   24633 pod_ready.go:103] pod "etcd-ha-313128-m02" in "kube-system" namespace has status "Ready":"False"
	I0906 18:52:38.513208   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:38.513230   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:38.513237   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:38.513245   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:38.516361   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:38.517012   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:38.517033   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:38.517041   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:38.517046   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:38.519644   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.014111   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:52:39.014137   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.014148   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.014155   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.018177   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:39.018872   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:39.018888   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.018895   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.018899   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.021072   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.021575   24633 pod_ready.go:93] pod "etcd-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:39.021594   24633 pod_ready.go:82] duration metric: took 3.008654084s for pod "etcd-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.021615   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.021669   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128
	I0906 18:52:39.021677   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.021684   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.021690   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.023922   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.024564   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:39.024578   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.024585   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.024590   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.026527   24633 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0906 18:52:39.027115   24633 pod_ready.go:93] pod "kube-apiserver-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:39.027137   24633 pod_ready.go:82] duration metric: took 5.508891ms for pod "kube-apiserver-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.027147   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.027203   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128-m02
	I0906 18:52:39.027213   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.027223   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.027231   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.029427   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.177388   24633 request.go:632] Waited for 147.307588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:39.177449   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:39.177456   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.177467   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.177486   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.180429   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.181364   24633 pod_ready.go:93] pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:39.181385   24633 pod_ready.go:82] duration metric: took 154.23065ms for pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
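Editor's note: the "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own request rate limiter, not from the API server. When many GETs are issued back-to-back, requests queue behind the client's QPS/Burst budget (small defaults apply when rest.Config leaves them at zero). A hedged sketch of raising those limits, with an assumed kubeconfig path:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// QPS/Burst gate every request the clientset makes; left at zero, client-go
	// falls back to small defaults, which is what produces the
	// "Waited for ... due to client-side throttling" messages in this log.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset ready: %T\n", cs)
}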
	I0906 18:52:39.181397   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.376766   24633 request.go:632] Waited for 195.274368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128
	I0906 18:52:39.376882   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128
	I0906 18:52:39.376895   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.376909   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.376917   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.380203   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:39.577255   24633 request.go:632] Waited for 196.270673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:39.577340   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:39.577351   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.577362   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.577369   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.580260   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:52:39.580713   24633 pod_ready.go:93] pod "kube-controller-manager-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:39.580730   24633 pod_ready.go:82] duration metric: took 399.322629ms for pod "kube-controller-manager-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.580744   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.776877   24633 request.go:632] Waited for 196.02646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m02
	I0906 18:52:39.776928   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m02
	I0906 18:52:39.776933   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.776940   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.776946   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.779995   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:39.977057   24633 request.go:632] Waited for 196.350023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:39.977112   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:39.977117   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:39.977124   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:39.977129   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:39.980556   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:39.981147   24633 pod_ready.go:93] pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:39.981167   24633 pod_ready.go:82] duration metric: took 400.414888ms for pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:39.981182   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h5xn7" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:40.177276   24633 request.go:632] Waited for 196.01678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5xn7
	I0906 18:52:40.177341   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5xn7
	I0906 18:52:40.177346   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:40.177353   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:40.177360   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:40.181270   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:40.377330   24633 request.go:632] Waited for 195.375056ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:40.377418   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:40.377425   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:40.377438   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:40.377445   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:40.380818   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:40.381384   24633 pod_ready.go:93] pod "kube-proxy-h5xn7" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:40.381401   24633 pod_ready.go:82] duration metric: took 400.208949ms for pod "kube-proxy-h5xn7" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:40.381410   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xjp6p" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:40.577561   24633 request.go:632] Waited for 196.067497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjp6p
	I0906 18:52:40.577630   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjp6p
	I0906 18:52:40.577639   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:40.577650   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:40.577661   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:40.581754   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:40.776916   24633 request.go:632] Waited for 194.18645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:40.776995   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:40.777003   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:40.777013   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:40.777022   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:40.781043   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:40.781927   24633 pod_ready.go:93] pod "kube-proxy-xjp6p" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:40.781945   24633 pod_ready.go:82] duration metric: took 400.528095ms for pod "kube-proxy-xjp6p" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:40.781954   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:40.977226   24633 request.go:632] Waited for 195.19516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128
	I0906 18:52:40.977304   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128
	I0906 18:52:40.977311   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:40.977322   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:40.977331   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:40.981411   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:41.176594   24633 request.go:632] Waited for 194.339343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:41.176659   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:52:41.176664   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.176675   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.176689   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.180585   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:41.181224   24633 pod_ready.go:93] pod "kube-scheduler-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:41.181246   24633 pod_ready.go:82] duration metric: took 399.28558ms for pod "kube-scheduler-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:41.181256   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:41.377364   24633 request.go:632] Waited for 196.025341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m02
	I0906 18:52:41.377418   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m02
	I0906 18:52:41.377424   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.377431   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.377434   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.381071   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:41.577294   24633 request.go:632] Waited for 195.374529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:41.577367   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:52:41.577376   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.577383   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.577392   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.581274   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:52:41.582162   24633 pod_ready.go:93] pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:52:41.582179   24633 pod_ready.go:82] duration metric: took 400.916754ms for pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:52:41.582189   24633 pod_ready.go:39] duration metric: took 5.602061956s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
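Editor's note: each pod_ready wait above follows the same pattern: GET the pod, scan status.conditions for Ready, and re-poll until it reports True or the 6m0s budget runs out. A compact sketch of that loop using apimachinery's wait helpers; a hand-rolled illustration, not the pod_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// With a configured clientset `cs` (see the earlier sketch):
	//   err := waitForPodReady(cs, "kube-system", "etcd-ha-313128-m02", 6*time.Minute)
	fmt.Println("see waitForPodReady above")
}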
	I0906 18:52:41.582208   24633 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:52:41.582266   24633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:52:41.598573   24633 api_server.go:72] duration metric: took 20.361479931s to wait for apiserver process to appear ...
	I0906 18:52:41.598597   24633 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:52:41.598619   24633 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0906 18:52:41.604030   24633 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I0906 18:52:41.604099   24633 round_trippers.go:463] GET https://192.168.39.70:8443/version
	I0906 18:52:41.604108   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.604116   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.604122   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.605093   24633 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 18:52:41.605195   24633 api_server.go:141] control plane version: v1.31.0
	I0906 18:52:41.605213   24633 api_server.go:131] duration metric: took 6.609497ms to wait for apiserver health ...
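Editor's note: the apiserver health wait combines two probes, a raw GET of /healthz (expecting the literal body "ok") and a GET of /version to record the control-plane version. A rough equivalent through client-go's discovery client, illustrative only and using an assumed kubeconfig path:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// /healthz is not part of the typed API, so issue it as a raw request.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body) // expect "ok"

	// /version is exposed through the discovery client.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion) // e.g. v1.31.0
}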
	I0906 18:52:41.605223   24633 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:52:41.776652   24633 request.go:632] Waited for 171.293715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:52:41.776721   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:52:41.776728   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.776738   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.776743   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.782425   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:52:41.787311   24633 system_pods.go:59] 17 kube-system pods found
	I0906 18:52:41.787363   24633 system_pods.go:61] "coredns-6f6b679f8f-gccvh" [9b7c0e1a-3359-4f9f-826c-b75cbdfcd500] Running
	I0906 18:52:41.787373   24633 system_pods.go:61] "coredns-6f6b679f8f-gk28z" [ab595ef6-eaa8-44a0-bdad-ddd59c8d052d] Running
	I0906 18:52:41.787379   24633 system_pods.go:61] "etcd-ha-313128" [b4550b86-3359-44e6-a495-7db003a1bb95] Running
	I0906 18:52:41.787389   24633 system_pods.go:61] "etcd-ha-313128-m02" [d42fd6b2-4ecd-49e8-b5b2-5b29fabe2d1e] Running
	I0906 18:52:41.787394   24633 system_pods.go:61] "kindnet-h2trt" [90af3550-1fae-46bd-9329-f185fcdb23c6] Running
	I0906 18:52:41.787400   24633 system_pods.go:61] "kindnet-t65ls" [657498aa-b76e-4eb2-abbe-5d8a050fc415] Running
	I0906 18:52:41.787407   24633 system_pods.go:61] "kube-apiserver-ha-313128" [081ff647-e9c5-4cce-895a-e5e660db1acc] Running
	I0906 18:52:41.787413   24633 system_pods.go:61] "kube-apiserver-ha-313128-m02" [bed2808b-bef1-4fd4-a811-762a5ff46343] Running
	I0906 18:52:41.787419   24633 system_pods.go:61] "kube-controller-manager-ha-313128" [ba0308a5-06d8-468b-a3a1-e95a28a52dd7] Running
	I0906 18:52:41.787428   24633 system_pods.go:61] "kube-controller-manager-ha-313128-m02" [3f4032ce-c8b4-4a2c-9384-82d5d5ec0874] Running
	I0906 18:52:41.787433   24633 system_pods.go:61] "kube-proxy-h5xn7" [e45358c5-398e-4d33-9bd0-a4f28ce17ac9] Running
	I0906 18:52:41.787438   24633 system_pods.go:61] "kube-proxy-xjp6p" [0cbbf003-361c-441e-a2fe-18783999b020] Running
	I0906 18:52:41.787446   24633 system_pods.go:61] "kube-scheduler-ha-313128" [8580599b-125e-4a2f-9019-41b305c0f611] Running
	I0906 18:52:41.787454   24633 system_pods.go:61] "kube-scheduler-ha-313128-m02" [81cb0c5f-7e54-4e8c-b089-d6a4e2c9cbf0] Running
	I0906 18:52:41.787459   24633 system_pods.go:61] "kube-vip-ha-313128" [6e270949-38fe-475f-b902-ede9d2cb795f] Running
	I0906 18:52:41.787464   24633 system_pods.go:61] "kube-vip-ha-313128-m02" [949996a0-0ce0-4ce4-b9ec-86c8f35a4a96] Running
	I0906 18:52:41.787470   24633 system_pods.go:61] "storage-provisioner" [6c957eac-7904-4c39-b858-bfb7da32c75c] Running
	I0906 18:52:41.787479   24633 system_pods.go:74] duration metric: took 182.248108ms to wait for pod list to return data ...
	I0906 18:52:41.787490   24633 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:52:41.976938   24633 request.go:632] Waited for 189.371408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0906 18:52:41.977003   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0906 18:52:41.977009   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:41.977019   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:41.977026   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:41.981174   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:41.981432   24633 default_sa.go:45] found service account: "default"
	I0906 18:52:41.981453   24633 default_sa.go:55] duration metric: took 193.950991ms for default service account to be created ...
	I0906 18:52:41.981463   24633 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:52:42.176877   24633 request.go:632] Waited for 195.280058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:52:42.176942   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:52:42.176949   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:42.176959   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:42.176967   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:42.183456   24633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 18:52:42.187964   24633 system_pods.go:86] 17 kube-system pods found
	I0906 18:52:42.187995   24633 system_pods.go:89] "coredns-6f6b679f8f-gccvh" [9b7c0e1a-3359-4f9f-826c-b75cbdfcd500] Running
	I0906 18:52:42.188001   24633 system_pods.go:89] "coredns-6f6b679f8f-gk28z" [ab595ef6-eaa8-44a0-bdad-ddd59c8d052d] Running
	I0906 18:52:42.188005   24633 system_pods.go:89] "etcd-ha-313128" [b4550b86-3359-44e6-a495-7db003a1bb95] Running
	I0906 18:52:42.188009   24633 system_pods.go:89] "etcd-ha-313128-m02" [d42fd6b2-4ecd-49e8-b5b2-5b29fabe2d1e] Running
	I0906 18:52:42.188012   24633 system_pods.go:89] "kindnet-h2trt" [90af3550-1fae-46bd-9329-f185fcdb23c6] Running
	I0906 18:52:42.188016   24633 system_pods.go:89] "kindnet-t65ls" [657498aa-b76e-4eb2-abbe-5d8a050fc415] Running
	I0906 18:52:42.188020   24633 system_pods.go:89] "kube-apiserver-ha-313128" [081ff647-e9c5-4cce-895a-e5e660db1acc] Running
	I0906 18:52:42.188024   24633 system_pods.go:89] "kube-apiserver-ha-313128-m02" [bed2808b-bef1-4fd4-a811-762a5ff46343] Running
	I0906 18:52:42.188027   24633 system_pods.go:89] "kube-controller-manager-ha-313128" [ba0308a5-06d8-468b-a3a1-e95a28a52dd7] Running
	I0906 18:52:42.188030   24633 system_pods.go:89] "kube-controller-manager-ha-313128-m02" [3f4032ce-c8b4-4a2c-9384-82d5d5ec0874] Running
	I0906 18:52:42.188035   24633 system_pods.go:89] "kube-proxy-h5xn7" [e45358c5-398e-4d33-9bd0-a4f28ce17ac9] Running
	I0906 18:52:42.188038   24633 system_pods.go:89] "kube-proxy-xjp6p" [0cbbf003-361c-441e-a2fe-18783999b020] Running
	I0906 18:52:42.188040   24633 system_pods.go:89] "kube-scheduler-ha-313128" [8580599b-125e-4a2f-9019-41b305c0f611] Running
	I0906 18:52:42.188043   24633 system_pods.go:89] "kube-scheduler-ha-313128-m02" [81cb0c5f-7e54-4e8c-b089-d6a4e2c9cbf0] Running
	I0906 18:52:42.188046   24633 system_pods.go:89] "kube-vip-ha-313128" [6e270949-38fe-475f-b902-ede9d2cb795f] Running
	I0906 18:52:42.188049   24633 system_pods.go:89] "kube-vip-ha-313128-m02" [949996a0-0ce0-4ce4-b9ec-86c8f35a4a96] Running
	I0906 18:52:42.188052   24633 system_pods.go:89] "storage-provisioner" [6c957eac-7904-4c39-b858-bfb7da32c75c] Running
	I0906 18:52:42.188057   24633 system_pods.go:126] duration metric: took 206.585774ms to wait for k8s-apps to be running ...
	I0906 18:52:42.188065   24633 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:52:42.188104   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:52:42.202879   24633 system_svc.go:56] duration metric: took 14.807481ms WaitForService to wait for kubelet
	I0906 18:52:42.202905   24633 kubeadm.go:582] duration metric: took 20.965817345s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:52:42.202932   24633 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:52:42.377174   24633 request.go:632] Waited for 174.162112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes
	I0906 18:52:42.377231   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes
	I0906 18:52:42.377238   24633 round_trippers.go:469] Request Headers:
	I0906 18:52:42.377249   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:52:42.377257   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:52:42.381619   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:52:42.382336   24633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:52:42.382360   24633 node_conditions.go:123] node cpu capacity is 2
	I0906 18:52:42.382386   24633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:52:42.382390   24633 node_conditions.go:123] node cpu capacity is 2
	I0906 18:52:42.382394   24633 node_conditions.go:105] duration metric: took 179.458216ms to run NodePressure ...
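Editor's note: the NodePressure step lists all nodes and reads their reported capacity (here 2 CPUs and 17734596Ki of ephemeral storage per node). A small sketch of pulling those figures, reusing the hypothetical clientset from the sketches above:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node's CPU and ephemeral-storage capacity,
// mirroring the values logged by the NodePressure verification above.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}

func main() {
	// With a configured clientset `cs` (see the earlier sketches):
	//   _ = printNodeCapacity(context.Background(), cs)
	fmt.Println("see printNodeCapacity above")
}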
	I0906 18:52:42.382408   24633 start.go:241] waiting for startup goroutines ...
	I0906 18:52:42.382439   24633 start.go:255] writing updated cluster config ...
	I0906 18:52:42.384374   24633 out.go:201] 
	I0906 18:52:42.385896   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:52:42.385977   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:52:42.387373   24633 out.go:177] * Starting "ha-313128-m03" control-plane node in "ha-313128" cluster
	I0906 18:52:42.388310   24633 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 18:52:42.388331   24633 cache.go:56] Caching tarball of preloaded images
	I0906 18:52:42.388442   24633 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 18:52:42.388454   24633 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 18:52:42.388533   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:52:42.388784   24633 start.go:360] acquireMachinesLock for ha-313128-m03: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 18:52:42.388822   24633 start.go:364] duration metric: took 22.001µs to acquireMachinesLock for "ha-313128-m03"
	I0906 18:52:42.388840   24633 start.go:93] Provisioning new machine with config: &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:52:42.388949   24633 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0906 18:52:42.390247   24633 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 18:52:42.390362   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:52:42.390394   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:52:42.405591   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0906 18:52:42.406111   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:52:42.406615   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:52:42.406634   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:52:42.406956   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:52:42.407134   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetMachineName
	I0906 18:52:42.407289   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:52:42.407430   24633 start.go:159] libmachine.API.Create for "ha-313128" (driver="kvm2")
	I0906 18:52:42.407466   24633 client.go:168] LocalClient.Create starting
	I0906 18:52:42.407501   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 18:52:42.407543   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:52:42.407566   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:52:42.407635   24633 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 18:52:42.407671   24633 main.go:141] libmachine: Decoding PEM data...
	I0906 18:52:42.407686   24633 main.go:141] libmachine: Parsing certificate...
	I0906 18:52:42.407708   24633 main.go:141] libmachine: Running pre-create checks...
	I0906 18:52:42.407719   24633 main.go:141] libmachine: (ha-313128-m03) Calling .PreCreateCheck
	I0906 18:52:42.407960   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetConfigRaw
	I0906 18:52:42.408419   24633 main.go:141] libmachine: Creating machine...
	I0906 18:52:42.408431   24633 main.go:141] libmachine: (ha-313128-m03) Calling .Create
	I0906 18:52:42.408578   24633 main.go:141] libmachine: (ha-313128-m03) Creating KVM machine...
	I0906 18:52:42.409894   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found existing default KVM network
	I0906 18:52:42.410024   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found existing private KVM network mk-ha-313128
	I0906 18:52:42.410166   24633 main.go:141] libmachine: (ha-313128-m03) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03 ...
	I0906 18:52:42.410183   24633 main.go:141] libmachine: (ha-313128-m03) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:52:42.410295   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:42.410178   25383 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:52:42.410429   24633 main.go:141] libmachine: (ha-313128-m03) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 18:52:42.672936   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:42.672778   25383 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa...
	I0906 18:52:42.960450   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:42.960318   25383 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/ha-313128-m03.rawdisk...
	I0906 18:52:42.960474   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Writing magic tar header
	I0906 18:52:42.960485   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Writing SSH key tar header
	I0906 18:52:42.960498   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:42.960465   25383 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03 ...
	I0906 18:52:42.960595   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03
	I0906 18:52:42.960628   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 18:52:42.960638   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:52:42.960646   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03 (perms=drwx------)
	I0906 18:52:42.960653   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 18:52:42.960681   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 18:52:42.960704   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 18:52:42.960716   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 18:52:42.960729   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home/jenkins
	I0906 18:52:42.960744   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 18:52:42.960757   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 18:52:42.960767   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Checking permissions on dir: /home
	I0906 18:52:42.960787   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Skipping /home - not owner
	I0906 18:52:42.960805   24633 main.go:141] libmachine: (ha-313128-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 18:52:42.960816   24633 main.go:141] libmachine: (ha-313128-m03) Creating domain...
	I0906 18:52:42.961768   24633 main.go:141] libmachine: (ha-313128-m03) define libvirt domain using xml: 
	I0906 18:52:42.961791   24633 main.go:141] libmachine: (ha-313128-m03) <domain type='kvm'>
	I0906 18:52:42.961802   24633 main.go:141] libmachine: (ha-313128-m03)   <name>ha-313128-m03</name>
	I0906 18:52:42.961814   24633 main.go:141] libmachine: (ha-313128-m03)   <memory unit='MiB'>2200</memory>
	I0906 18:52:42.961823   24633 main.go:141] libmachine: (ha-313128-m03)   <vcpu>2</vcpu>
	I0906 18:52:42.961836   24633 main.go:141] libmachine: (ha-313128-m03)   <features>
	I0906 18:52:42.961849   24633 main.go:141] libmachine: (ha-313128-m03)     <acpi/>
	I0906 18:52:42.961859   24633 main.go:141] libmachine: (ha-313128-m03)     <apic/>
	I0906 18:52:42.961867   24633 main.go:141] libmachine: (ha-313128-m03)     <pae/>
	I0906 18:52:42.961877   24633 main.go:141] libmachine: (ha-313128-m03)     
	I0906 18:52:42.961891   24633 main.go:141] libmachine: (ha-313128-m03)   </features>
	I0906 18:52:42.961903   24633 main.go:141] libmachine: (ha-313128-m03)   <cpu mode='host-passthrough'>
	I0906 18:52:42.961913   24633 main.go:141] libmachine: (ha-313128-m03)   
	I0906 18:52:42.961920   24633 main.go:141] libmachine: (ha-313128-m03)   </cpu>
	I0906 18:52:42.961932   24633 main.go:141] libmachine: (ha-313128-m03)   <os>
	I0906 18:52:42.961940   24633 main.go:141] libmachine: (ha-313128-m03)     <type>hvm</type>
	I0906 18:52:42.961952   24633 main.go:141] libmachine: (ha-313128-m03)     <boot dev='cdrom'/>
	I0906 18:52:42.961961   24633 main.go:141] libmachine: (ha-313128-m03)     <boot dev='hd'/>
	I0906 18:52:42.961973   24633 main.go:141] libmachine: (ha-313128-m03)     <bootmenu enable='no'/>
	I0906 18:52:42.961982   24633 main.go:141] libmachine: (ha-313128-m03)   </os>
	I0906 18:52:42.961993   24633 main.go:141] libmachine: (ha-313128-m03)   <devices>
	I0906 18:52:42.962000   24633 main.go:141] libmachine: (ha-313128-m03)     <disk type='file' device='cdrom'>
	I0906 18:52:42.962016   24633 main.go:141] libmachine: (ha-313128-m03)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/boot2docker.iso'/>
	I0906 18:52:42.962028   24633 main.go:141] libmachine: (ha-313128-m03)       <target dev='hdc' bus='scsi'/>
	I0906 18:52:42.962037   24633 main.go:141] libmachine: (ha-313128-m03)       <readonly/>
	I0906 18:52:42.962047   24633 main.go:141] libmachine: (ha-313128-m03)     </disk>
	I0906 18:52:42.962059   24633 main.go:141] libmachine: (ha-313128-m03)     <disk type='file' device='disk'>
	I0906 18:52:42.962071   24633 main.go:141] libmachine: (ha-313128-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 18:52:42.962082   24633 main.go:141] libmachine: (ha-313128-m03)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/ha-313128-m03.rawdisk'/>
	I0906 18:52:42.962094   24633 main.go:141] libmachine: (ha-313128-m03)       <target dev='hda' bus='virtio'/>
	I0906 18:52:42.962105   24633 main.go:141] libmachine: (ha-313128-m03)     </disk>
	I0906 18:52:42.962116   24633 main.go:141] libmachine: (ha-313128-m03)     <interface type='network'>
	I0906 18:52:42.962132   24633 main.go:141] libmachine: (ha-313128-m03)       <source network='mk-ha-313128'/>
	I0906 18:52:42.962142   24633 main.go:141] libmachine: (ha-313128-m03)       <model type='virtio'/>
	I0906 18:52:42.962153   24633 main.go:141] libmachine: (ha-313128-m03)     </interface>
	I0906 18:52:42.962162   24633 main.go:141] libmachine: (ha-313128-m03)     <interface type='network'>
	I0906 18:52:42.962170   24633 main.go:141] libmachine: (ha-313128-m03)       <source network='default'/>
	I0906 18:52:42.962179   24633 main.go:141] libmachine: (ha-313128-m03)       <model type='virtio'/>
	I0906 18:52:42.962207   24633 main.go:141] libmachine: (ha-313128-m03)     </interface>
	I0906 18:52:42.962228   24633 main.go:141] libmachine: (ha-313128-m03)     <serial type='pty'>
	I0906 18:52:42.962241   24633 main.go:141] libmachine: (ha-313128-m03)       <target port='0'/>
	I0906 18:52:42.962254   24633 main.go:141] libmachine: (ha-313128-m03)     </serial>
	I0906 18:52:42.962284   24633 main.go:141] libmachine: (ha-313128-m03)     <console type='pty'>
	I0906 18:52:42.962312   24633 main.go:141] libmachine: (ha-313128-m03)       <target type='serial' port='0'/>
	I0906 18:52:42.962329   24633 main.go:141] libmachine: (ha-313128-m03)     </console>
	I0906 18:52:42.962340   24633 main.go:141] libmachine: (ha-313128-m03)     <rng model='virtio'>
	I0906 18:52:42.962351   24633 main.go:141] libmachine: (ha-313128-m03)       <backend model='random'>/dev/random</backend>
	I0906 18:52:42.962361   24633 main.go:141] libmachine: (ha-313128-m03)     </rng>
	I0906 18:52:42.962369   24633 main.go:141] libmachine: (ha-313128-m03)     
	I0906 18:52:42.962378   24633 main.go:141] libmachine: (ha-313128-m03)     
	I0906 18:52:42.962386   24633 main.go:141] libmachine: (ha-313128-m03)   </devices>
	I0906 18:52:42.962395   24633 main.go:141] libmachine: (ha-313128-m03) </domain>
	I0906 18:52:42.962405   24633 main.go:141] libmachine: (ha-313128-m03) 
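Editor's note: libmachine's kvm2 driver emits the libvirt domain XML shown above and then defines and starts the domain through the libvirt API. A hedged sketch of those two calls with the libvirt Go bindings (libvirt.org/go/libvirt); this is not the driver's actual code path, and the domain.xml filename is a stand-in for the XML printed above.

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the same system URI the test config uses (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// domain.xml would contain the <domain type='kvm'> definition printed above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}

	// Define the persistent domain, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}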
	I0906 18:52:42.968960   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:6e:42:bb in network default
	I0906 18:52:42.969654   24633 main.go:141] libmachine: (ha-313128-m03) Ensuring networks are active...
	I0906 18:52:42.969681   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:42.970455   24633 main.go:141] libmachine: (ha-313128-m03) Ensuring network default is active
	I0906 18:52:42.970789   24633 main.go:141] libmachine: (ha-313128-m03) Ensuring network mk-ha-313128 is active
	I0906 18:52:42.971179   24633 main.go:141] libmachine: (ha-313128-m03) Getting domain xml...
	I0906 18:52:42.971917   24633 main.go:141] libmachine: (ha-313128-m03) Creating domain...
	I0906 18:52:44.206269   24633 main.go:141] libmachine: (ha-313128-m03) Waiting to get IP...
	I0906 18:52:44.207290   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:44.207825   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:44.207851   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:44.207812   25383 retry.go:31] will retry after 269.325849ms: waiting for machine to come up
	I0906 18:52:44.479059   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:44.479551   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:44.479580   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:44.479501   25383 retry.go:31] will retry after 259.571768ms: waiting for machine to come up
	I0906 18:52:44.741020   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:44.741529   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:44.741561   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:44.741486   25383 retry.go:31] will retry after 344.482395ms: waiting for machine to come up
	I0906 18:52:45.087978   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:45.088479   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:45.088508   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:45.088430   25383 retry.go:31] will retry after 469.573996ms: waiting for machine to come up
	I0906 18:52:45.559051   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:45.559525   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:45.559558   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:45.559474   25383 retry.go:31] will retry after 549.907681ms: waiting for machine to come up
	I0906 18:52:46.111222   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:46.111794   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:46.111824   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:46.111739   25383 retry.go:31] will retry after 897.894422ms: waiting for machine to come up
	I0906 18:52:47.011456   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:47.012300   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:47.012332   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:47.012240   25383 retry.go:31] will retry after 1.023510644s: waiting for machine to come up
	I0906 18:52:48.037255   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:48.037760   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:48.037788   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:48.037710   25383 retry.go:31] will retry after 1.096197794s: waiting for machine to come up
	I0906 18:52:49.135190   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:49.135772   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:49.135799   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:49.135721   25383 retry.go:31] will retry after 1.322554958s: waiting for machine to come up
	I0906 18:52:50.459897   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:50.460204   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:50.460224   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:50.460165   25383 retry.go:31] will retry after 1.619516894s: waiting for machine to come up
	I0906 18:52:52.081273   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:52.081758   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:52.081788   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:52.081702   25383 retry.go:31] will retry after 1.955341722s: waiting for machine to come up
	I0906 18:52:54.038968   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:54.039367   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:54.039421   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:54.039323   25383 retry.go:31] will retry after 2.472747912s: waiting for machine to come up
	I0906 18:52:56.513791   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:52:56.514187   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:52:56.514211   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:52:56.514144   25383 retry.go:31] will retry after 3.605132636s: waiting for machine to come up
	I0906 18:53:00.121842   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:00.122311   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find current IP address of domain ha-313128-m03 in network mk-ha-313128
	I0906 18:53:00.122332   24633 main.go:141] libmachine: (ha-313128-m03) DBG | I0906 18:53:00.122283   25383 retry.go:31] will retry after 5.401636488s: waiting for machine to come up
	I0906 18:53:05.527338   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.527877   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has current primary IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.527899   24633 main.go:141] libmachine: (ha-313128-m03) Found IP for machine: 192.168.39.172
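The "will retry after ..." lines above show libmachine polling the libvirt DHCP leases with a growing, jittered delay until the new domain reports an IP. The following is a minimal, self-contained Go sketch of that pattern only; the function names and the stubbed lookup are illustrative and are not minikube's actual retry.go implementation.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying the hypervisor's DHCP leases; this stub
    // pretends the lease shows up after roughly 20 seconds.
    func lookupIP(start time.Time) (string, error) {
    	if time.Since(start) < 20*time.Second {
    		return "", errors.New("unable to find current IP address")
    	}
    	return "192.168.39.172", nil
    }

    // waitForIP retries with a growing, jittered delay, similar in spirit to
    // the "will retry after ...: waiting for machine to come up" log lines.
    func waitForIP(timeout time.Duration) (string, error) {
    	start := time.Now()
    	delay := 250 * time.Millisecond
    	for time.Since(start) < timeout {
    		if ip, err := lookupIP(start); err == nil {
    			return ip, nil
    		}
    		// add up to 50% jitter, then grow the base delay, capped at 5s
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 5*time.Second {
    			delay = delay * 3 / 2
    		}
    	}
    	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
    }

    func main() {
    	ip, err := waitForIP(2 * time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Found IP for machine:", ip)
    }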
	I0906 18:53:05.527911   24633 main.go:141] libmachine: (ha-313128-m03) Reserving static IP address...
	I0906 18:53:05.528327   24633 main.go:141] libmachine: (ha-313128-m03) DBG | unable to find host DHCP lease matching {name: "ha-313128-m03", mac: "52:54:00:90:b3:07", ip: "192.168.39.172"} in network mk-ha-313128
	I0906 18:53:05.601029   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Getting to WaitForSSH function...
	I0906 18:53:05.601061   24633 main.go:141] libmachine: (ha-313128-m03) Reserved static IP address: 192.168.39.172
	I0906 18:53:05.601079   24633 main.go:141] libmachine: (ha-313128-m03) Waiting for SSH to be available...
	I0906 18:53:05.603690   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.604143   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:05.604168   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.604367   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Using SSH client type: external
	I0906 18:53:05.604394   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa (-rw-------)
	I0906 18:53:05.604423   24633 main.go:141] libmachine: (ha-313128-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 18:53:05.604439   24633 main.go:141] libmachine: (ha-313128-m03) DBG | About to run SSH command:
	I0906 18:53:05.604451   24633 main.go:141] libmachine: (ha-313128-m03) DBG | exit 0
	I0906 18:53:05.729014   24633 main.go:141] libmachine: (ha-313128-m03) DBG | SSH cmd err, output: <nil>: 
	I0906 18:53:05.729277   24633 main.go:141] libmachine: (ha-313128-m03) KVM machine creation complete!
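The WaitForSSH step above probes the guest by running "exit 0" through an external ssh client with host-key checking disabled. A small Go sketch of an equivalent probe, assuming the host address and key path shown; this is an illustration of the technique, not minikube's sshutil code.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshReachable runs "exit 0" over the system ssh client with options like
    // those in the log above; host and keyPath are placeholders.
    func sshReachable(host, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + host,
    		"exit 0",
    	}
    	out, err := exec.Command("ssh", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := sshReachable("192.168.39.172", "/path/to/id_rsa"); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("SSH is available")
    }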
	I0906 18:53:05.729579   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetConfigRaw
	I0906 18:53:05.730093   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:05.730321   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:05.730492   24633 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 18:53:05.730505   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:53:05.731649   24633 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 18:53:05.731662   24633 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 18:53:05.731673   24633 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 18:53:05.731679   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:05.733873   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.734243   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:05.734274   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.734383   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:05.734581   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.734727   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.734833   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:05.734991   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:05.735215   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:05.735239   24633 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 18:53:05.840443   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:53:05.840475   24633 main.go:141] libmachine: Detecting the provisioner...
	I0906 18:53:05.840485   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:05.843177   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.843554   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:05.843583   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.843765   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:05.843954   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.844086   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.844184   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:05.844380   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:05.844548   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:05.844558   24633 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 18:53:05.949677   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 18:53:05.949774   24633 main.go:141] libmachine: found compatible host: buildroot
	I0906 18:53:05.949784   24633 main.go:141] libmachine: Provisioning with buildroot...
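Provisioner detection above works by reading /etc/os-release over SSH and matching the ID field ("buildroot" here). A minimal local sketch of parsing that file format; it is not the libmachine provisioner-detection code.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // Read /etc/os-release and report the ID field, the same field the
    // provisioner detection in the log keys off of.
    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.HasPrefix(line, "ID=") {
    			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    			fmt.Println("detected provisioner family:", id)
    		}
    	}
    }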
	I0906 18:53:05.949793   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetMachineName
	I0906 18:53:05.950038   24633 buildroot.go:166] provisioning hostname "ha-313128-m03"
	I0906 18:53:05.950059   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetMachineName
	I0906 18:53:05.950201   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:05.952795   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.953180   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:05.953212   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:05.953325   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:05.953498   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.953649   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:05.953814   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:05.953954   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:05.954108   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:05.954118   24633 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-313128-m03 && echo "ha-313128-m03" | sudo tee /etc/hostname
	I0906 18:53:06.072413   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128-m03
	
	I0906 18:53:06.072439   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.075110   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.075526   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.075554   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.075831   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:06.076026   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.076220   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.076328   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:06.076519   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:06.076679   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:06.076697   24633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-313128-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-313128-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-313128-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:53:06.191781   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
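The two SSH commands above set the hostname persistently and then make sure /etc/hosts maps 127.0.1.1 to the new name. A small Go helper that assembles the same kind of shell snippet; the function name is made up and the snippet mirrors the commands in the log rather than quoting minikube source.

    package main

    import "fmt"

    // hostnameCmd builds a shell snippet equivalent to the provisioning
    // commands shown above: set the hostname, then guard /etc/hosts.
    func hostnameCmd(name string) string {
    	return fmt.Sprintf(
    		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() {
    	fmt.Println(hostnameCmd("ha-313128-m03"))
    }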
	I0906 18:53:06.191813   24633 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 18:53:06.191834   24633 buildroot.go:174] setting up certificates
	I0906 18:53:06.191848   24633 provision.go:84] configureAuth start
	I0906 18:53:06.191861   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetMachineName
	I0906 18:53:06.192106   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:53:06.194630   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.194897   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.194923   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.195124   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.197545   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.197899   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.197925   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.198058   24633 provision.go:143] copyHostCerts
	I0906 18:53:06.198091   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:53:06.198130   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 18:53:06.198142   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 18:53:06.198219   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 18:53:06.198312   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:53:06.198336   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 18:53:06.198344   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 18:53:06.198383   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 18:53:06.198448   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:53:06.198471   24633 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 18:53:06.198479   24633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 18:53:06.198517   24633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 18:53:06.198594   24633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.ha-313128-m03 san=[127.0.0.1 192.168.39.172 ha-313128-m03 localhost minikube]
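The server certificate above is issued for a SAN list mixing IP addresses and host names (127.0.0.1, the node IP, the node name, localhost, minikube). As a rough illustration of building a certificate with those kinds of SANs in Go, here is a simplified, self-signed variant; the real flow in the log signs with the minikube CA key instead.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // Generate a server certificate carrying IP and DNS SANs similar to the
    // san=[...] list logged above. Self-signed here purely for brevity.
    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-313128-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.172")},
    		DNSNames:     []string{"ha-313128-m03", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }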
	I0906 18:53:06.364914   24633 provision.go:177] copyRemoteCerts
	I0906 18:53:06.364978   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:53:06.365007   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.367341   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.367666   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.367692   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.367850   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:06.368022   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.368164   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:06.368284   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:53:06.451510   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 18:53:06.451589   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:53:06.478096   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 18:53:06.478160   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 18:53:06.503688   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 18:53:06.503768   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 18:53:06.528822   24633 provision.go:87] duration metric: took 336.96118ms to configureAuth
	I0906 18:53:06.528850   24633 buildroot.go:189] setting minikube options for container-runtime
	I0906 18:53:06.529126   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:53:06.529201   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.532385   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.532849   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.532900   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.533143   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:06.533361   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.533530   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.533673   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:06.533855   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:06.534077   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:06.534093   24633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 18:53:06.756664   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 18:53:06.756686   24633 main.go:141] libmachine: Checking connection to Docker...
	I0906 18:53:06.756694   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetURL
	I0906 18:53:06.757884   24633 main.go:141] libmachine: (ha-313128-m03) DBG | Using libvirt version 6000000
	I0906 18:53:06.760136   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.760546   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.760584   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.760740   24633 main.go:141] libmachine: Docker is up and running!
	I0906 18:53:06.760758   24633 main.go:141] libmachine: Reticulating splines...
	I0906 18:53:06.760765   24633 client.go:171] duration metric: took 24.353288857s to LocalClient.Create
	I0906 18:53:06.760784   24633 start.go:167] duration metric: took 24.353355904s to libmachine.API.Create "ha-313128"
	I0906 18:53:06.760793   24633 start.go:293] postStartSetup for "ha-313128-m03" (driver="kvm2")
	I0906 18:53:06.760803   24633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:53:06.760819   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:06.761062   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:53:06.761085   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.763644   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.763985   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.764012   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.764192   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:06.764397   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.764578   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:06.764735   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:53:06.847844   24633 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:53:06.852718   24633 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 18:53:06.852747   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 18:53:06.852822   24633 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 18:53:06.852936   24633 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 18:53:06.852952   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 18:53:06.853048   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 18:53:06.863393   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:53:06.888736   24633 start.go:296] duration metric: took 127.929369ms for postStartSetup
	I0906 18:53:06.888797   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetConfigRaw
	I0906 18:53:06.889451   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:53:06.892071   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.892487   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.892514   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.892825   24633 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:53:06.893247   24633 start.go:128] duration metric: took 24.504277174s to createHost
	I0906 18:53:06.893274   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:06.895395   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.895728   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:06.895757   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:06.895895   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:06.896083   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.896245   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:06.896375   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:06.896551   24633 main.go:141] libmachine: Using SSH client type: native
	I0906 18:53:06.896748   24633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0906 18:53:06.896761   24633 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 18:53:07.001946   24633 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725648786.975718563
	
	I0906 18:53:07.001969   24633 fix.go:216] guest clock: 1725648786.975718563
	I0906 18:53:07.001979   24633 fix.go:229] Guest: 2024-09-06 18:53:06.975718563 +0000 UTC Remote: 2024-09-06 18:53:06.893261539 +0000 UTC m=+144.685491150 (delta=82.457024ms)
	I0906 18:53:07.002009   24633 fix.go:200] guest clock delta is within tolerance: 82.457024ms
	I0906 18:53:07.002019   24633 start.go:83] releasing machines lock for "ha-313128-m03", held for 24.613186073s
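The clock check above parses the guest's "date +%s.%N" output, compares it to the host timestamp, and proceeds because the 82ms delta is within tolerance. A tiny Go sketch of that comparison using the timestamps from the log; the one-second tolerance here is an assumption for illustration, not the value minikube uses.

    package main

    import (
    	"fmt"
    	"time"
    )

    // Compare the guest clock reading with the host-side reference and decide
    // whether the skew is acceptable.
    func main() {
    	guest := time.Unix(1725648786, 975718563)  // parsed from the guest output above
    	remote := time.Unix(1725648786, 893261539) // host-side reference timestamp
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // illustrative tolerance
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance; time sync needed\n", delta)
    	}
    }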
	I0906 18:53:07.002047   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:07.002365   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:53:07.005201   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.005588   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:07.005613   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.008039   24633 out.go:177] * Found network options:
	I0906 18:53:07.009756   24633 out.go:177]   - NO_PROXY=192.168.39.70,192.168.39.32
	W0906 18:53:07.011035   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 18:53:07.011064   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 18:53:07.011082   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:07.011707   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:07.011907   24633 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:53:07.012004   24633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:53:07.012042   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	W0906 18:53:07.012101   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	W0906 18:53:07.012135   24633 proxy.go:119] fail to check proxy env: Error ip not in block
	I0906 18:53:07.012207   24633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 18:53:07.012227   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:53:07.014979   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.015007   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.015430   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:07.015460   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.015493   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:07.015509   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:07.015580   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:07.015776   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:53:07.015786   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:07.015963   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:53:07.015965   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:07.016126   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:53:07.016150   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:53:07.016272   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:53:07.248808   24633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 18:53:07.255444   24633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 18:53:07.255518   24633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:53:07.272358   24633 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 18:53:07.272381   24633 start.go:495] detecting cgroup driver to use...
	I0906 18:53:07.272447   24633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 18:53:07.290268   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 18:53:07.305250   24633 docker.go:217] disabling cri-docker service (if available) ...
	I0906 18:53:07.305302   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 18:53:07.320102   24633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 18:53:07.334587   24633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 18:53:07.451557   24633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 18:53:07.626596   24633 docker.go:233] disabling docker service ...
	I0906 18:53:07.626675   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 18:53:07.641115   24633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 18:53:07.654454   24633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 18:53:07.779657   24633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 18:53:07.902355   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 18:53:07.917720   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:53:07.938374   24633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 18:53:07.938439   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:07.952230   24633 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 18:53:07.952305   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:07.963927   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:07.974677   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:07.985298   24633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:53:07.996651   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:08.008163   24633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:08.026528   24633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 18:53:08.038498   24633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:53:08.048748   24633 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 18:53:08.048803   24633 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 18:53:08.063095   24633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:53:08.073574   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:53:08.193677   24633 ssh_runner.go:195] Run: sudo systemctl restart crio
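The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted: the pause image is set to registry.k8s.io/pause:3.10, cgroup_manager to "cgroupfs", and conmon_cgroup to "pod". The following Go sketch applies the same substitutions to an in-memory copy of a made-up config so the end result is easy to see; the starting config contents are an assumption, not the actual file from the VM.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Apply the same three edits the log performs with sed, but on a string.
    func main() {
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// 1) point the pause image at 3.10
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// 2) drop any existing conmon_cgroup line
    	conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).
    		ReplaceAllString(conf, "")
    	// 3) switch the cgroup manager and pin conmon_cgroup to "pod"
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	fmt.Print(conf)
    }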
	I0906 18:53:08.285533   24633 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 18:53:08.285606   24633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 18:53:08.290429   24633 start.go:563] Will wait 60s for crictl version
	I0906 18:53:08.290477   24633 ssh_runner.go:195] Run: which crictl
	I0906 18:53:08.294356   24633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:53:08.336784   24633 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 18:53:08.336886   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:53:08.367015   24633 ssh_runner.go:195] Run: crio --version
	I0906 18:53:08.398051   24633 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 18:53:08.399358   24633 out.go:177]   - env NO_PROXY=192.168.39.70
	I0906 18:53:08.400519   24633 out.go:177]   - env NO_PROXY=192.168.39.70,192.168.39.32
	I0906 18:53:08.401625   24633 main.go:141] libmachine: (ha-313128-m03) Calling .GetIP
	I0906 18:53:08.404166   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:08.404535   24633 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:53:08.404568   24633 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:53:08.404796   24633 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 18:53:08.409362   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:53:08.422176   24633 mustload.go:65] Loading cluster: ha-313128
	I0906 18:53:08.422434   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:53:08.422904   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:53:08.422950   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:08.438041   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43061
	I0906 18:53:08.438487   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:08.438895   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:53:08.438918   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:08.439253   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:08.439447   24633 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 18:53:08.441079   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:53:08.441376   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:53:08.441417   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:08.456403   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33491
	I0906 18:53:08.456802   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:08.457251   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:53:08.457276   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:08.457570   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:08.457784   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:53:08.457940   24633 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128 for IP: 192.168.39.172
	I0906 18:53:08.457952   24633 certs.go:194] generating shared ca certs ...
	I0906 18:53:08.457970   24633 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:53:08.458109   24633 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 18:53:08.458167   24633 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 18:53:08.458178   24633 certs.go:256] generating profile certs ...
	I0906 18:53:08.458252   24633 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key
	I0906 18:53:08.458277   24633 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.694b0ac9
	I0906 18:53:08.458291   24633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.694b0ac9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.70 192.168.39.32 192.168.39.172 192.168.39.254]
	I0906 18:53:08.593889   24633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.694b0ac9 ...
	I0906 18:53:08.593920   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.694b0ac9: {Name:mk6c999646e794fc171d59c7a727ee1ebb048cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:53:08.594082   24633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.694b0ac9 ...
	I0906 18:53:08.594098   24633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.694b0ac9: {Name:mkf8af5f6f963663c0d89938e375b153be71e632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:53:08.594168   24633 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.694b0ac9 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt
	I0906 18:53:08.594366   24633 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.694b0ac9 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key
	I0906 18:53:08.594542   24633 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key
	I0906 18:53:08.594560   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 18:53:08.594573   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 18:53:08.594583   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 18:53:08.594594   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 18:53:08.594604   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 18:53:08.594618   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 18:53:08.594630   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 18:53:08.594642   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 18:53:08.594701   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 18:53:08.594728   24633 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 18:53:08.594738   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 18:53:08.594761   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:53:08.594782   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:53:08.594803   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 18:53:08.594843   24633 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 18:53:08.594870   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 18:53:08.594884   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:53:08.594897   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 18:53:08.594924   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:53:08.597892   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:53:08.598284   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:53:08.598315   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:53:08.598485   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:53:08.598669   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:53:08.598826   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:53:08.598966   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:53:08.677160   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0906 18:53:08.685381   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0906 18:53:08.698851   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0906 18:53:08.703117   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0906 18:53:08.714724   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0906 18:53:08.718905   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0906 18:53:08.730196   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0906 18:53:08.735506   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0906 18:53:08.747184   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0906 18:53:08.751582   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0906 18:53:08.766710   24633 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0906 18:53:08.771975   24633 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0906 18:53:08.784212   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:53:08.810871   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 18:53:08.835164   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:53:08.861587   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:53:08.890093   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0906 18:53:08.914755   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 18:53:08.940093   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:53:08.965346   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 18:53:08.990696   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 18:53:09.014557   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:53:09.038432   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 18:53:09.067245   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0906 18:53:09.085969   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0906 18:53:09.103587   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0906 18:53:09.120199   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0906 18:53:09.136565   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0906 18:53:09.152936   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0906 18:53:09.169676   24633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0906 18:53:09.187770   24633 ssh_runner.go:195] Run: openssl version
	I0906 18:53:09.194813   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 18:53:09.206893   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 18:53:09.211625   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 18:53:09.211675   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 18:53:09.217877   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 18:53:09.230586   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 18:53:09.242731   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 18:53:09.248136   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 18:53:09.248196   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 18:53:09.253804   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 18:53:09.264699   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:53:09.276149   24633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:53:09.280764   24633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:53:09.280826   24633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:53:09.287180   24633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
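The commands above publish each CA into the node's system trust store: the PEM is copied to /usr/share/ca-certificates, its OpenSSL subject hash is computed with openssl x509 -hash -noout, and a "<hash>.0" symlink is created in /etc/ssl/certs so OpenSSL-based clients can resolve it. A minimal Go sketch of that hash-and-link step, shelling out the same way the test driver does over SSH (the path in main is illustrative, and root privileges are assumed for the symlink):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hashAndLink computes the OpenSSL subject hash of a CA certificate and
// symlinks it into /etc/ssl/certs as "<hash>.0", mirroring the
// "openssl x509 -hash" + "ln -fs" pair in the log above.
func hashAndLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// -f replaces a stale link left behind by an earlier certificate.
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}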
	I0906 18:53:09.298443   24633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:53:09.302501   24633 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:53:09.302555   24633 kubeadm.go:934] updating node {m03 192.168.39.172 8443 v1.31.0 crio true true} ...
	I0906 18:53:09.302674   24633 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-313128-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 18:53:09.302708   24633 kube-vip.go:115] generating kube-vip config ...
	I0906 18:53:09.302752   24633 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 18:53:09.320671   24633 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 18:53:09.320729   24633 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
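The manifest above configures kube-vip entirely through environment variables: vip_leaderelection has the control-plane nodes elect a single VIP holder via the plndr-cp-lock lease, address pins the virtual IP 192.168.39.254 on eth0, and lb_enable/lb_port add load balancing of API-server traffic on 8443. Once the static pod is running, the VIP should answer on the API-server port; a small probe sketch, with the address and timeout taken from this run and otherwise arbitrary:

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Probe the kube-vip virtual IP on the API-server port. Success only
	// shows the VIP is announced and something is listening, not that the
	// API server behind it is healthy.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		log.Fatalf("VIP not reachable: %v", err)
	}
	defer conn.Close()
	fmt.Println("kube-vip VIP is answering on", conn.RemoteAddr())
}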
	I0906 18:53:09.320806   24633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:53:09.330370   24633 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0906 18:53:09.330416   24633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0906 18:53:09.341121   24633 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0906 18:53:09.341155   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0906 18:53:09.341157   24633 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0906 18:53:09.341176   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0906 18:53:09.341125   24633 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0906 18:53:09.341248   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0906 18:53:09.341258   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0906 18:53:09.341248   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:53:09.351667   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0906 18:53:09.351709   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0906 18:53:09.351753   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0906 18:53:09.351790   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0906 18:53:09.369263   24633 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0906 18:53:09.369381   24633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0906 18:53:09.466522   24633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0906 18:53:09.466572   24633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
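Because /var/lib/minikube/binaries/v1.31.0 does not exist on the new node, kubeadm, kubectl and kubelet are resolved against dl.k8s.io (the checksum=file:...sha256 fragment names the published digest file) and pushed to the node from the local cache. A compact sketch of the underlying download-and-verify step, assuming plain HTTPS access to dl.k8s.io; the local destination path is illustrative:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into dst and returns dst's SHA-256 hex digest.
func fetch(url, dst string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
	got, err := fetch(url, "/tmp/kubelet")
	if err != nil {
		log.Fatal(err)
	}
	// The .sha256 sidecar file contains the hex digest of the binary.
	resp, err := http.Get(url + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		log.Fatalf("checksum mismatch: got %s want %s", got, want)
	}
	fmt.Println("kubelet verified:", got)
}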
	I0906 18:53:10.264236   24633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0906 18:53:10.274564   24633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0906 18:53:10.292286   24633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:53:10.310162   24633 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0906 18:53:10.326710   24633 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0906 18:53:10.331644   24633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
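The one-liner above rewrites /etc/hosts through a temp file: it filters out any stale control-plane.minikube.internal line and appends the HA virtual IP, so the name always resolves to 192.168.39.254 on this node. A rough Go equivalent of the same rewrite (the hosts path is hardcoded, sudo handling is omitted, and the original uses cp back into place rather than a rename):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	// Drop any existing control-plane.minikube.internal mapping, then append
	// the current VIP entry, mirroring the grep -v / echo pipeline above.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, hostsPath); err != nil {
		log.Fatal(err)
	}
	fmt.Println("updated", hostsPath)
}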
	I0906 18:53:10.344416   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:53:10.466981   24633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:53:10.485074   24633 host.go:66] Checking if "ha-313128" exists ...
	I0906 18:53:10.485589   24633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:53:10.485644   24633 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:53:10.502221   24633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33133
	I0906 18:53:10.502686   24633 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:53:10.503245   24633 main.go:141] libmachine: Using API Version  1
	I0906 18:53:10.503273   24633 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:53:10.503719   24633 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:53:10.503926   24633 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 18:53:10.504110   24633 start.go:317] joinCluster: &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:53:10.504240   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0906 18:53:10.504262   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 18:53:10.507441   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:53:10.507895   24633 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 18:53:10.507926   24633 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 18:53:10.508063   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 18:53:10.508262   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 18:53:10.508390   24633 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 18:53:10.508527   24633 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 18:53:10.660452   24633 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:53:10.660499   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yxaleg.cfeauffnnk9lcyg0 --discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-313128-m03 --control-plane --apiserver-advertise-address=192.168.39.172 --apiserver-bind-port=8443"
	I0906 18:53:41.526231   24633 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yxaleg.cfeauffnnk9lcyg0 --discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-313128-m03 --control-plane --apiserver-advertise-address=192.168.39.172 --apiserver-bind-port=8443": (30.86570375s)
	I0906 18:53:41.526267   24633 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0906 18:53:42.178453   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-313128-m03 minikube.k8s.io/updated_at=2024_09_06T18_53_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=ha-313128 minikube.k8s.io/primary=false
	I0906 18:53:42.313143   24633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-313128-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0906 18:53:42.438891   24633 start.go:319] duration metric: took 31.934778083s to joinCluster
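The join itself is the standard two-step flow: the existing control plane mints a join command with kubeadm token create --print-join-command, and the new node runs it with --control-plane plus its own advertise address; afterwards the node is labeled with minikube metadata and the control-plane NoSchedule taint is removed so it can also run workloads. A bare-bones sketch of those two steps over SSH, assuming passwordless SSH to both hosts; runOn is a hypothetical helper and the addresses are the ones from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// runOn executes a command on a remote host over SSH and returns its stdout.
// Hypothetical helper; the test driver does the same thing through ssh_runner.
func runOn(host, cmd string) (string, error) {
	out, err := exec.Command("ssh", host, cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	primary, newNode, nodeIP := "192.168.39.70", "192.168.39.172", "192.168.39.172"

	// Step 1: ask the existing control plane for a join command with a fresh token.
	joinCmd, err := runOn(primary, "sudo kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: run it on the new node as a control-plane join.
	join := fmt.Sprintf("sudo %s --control-plane --apiserver-advertise-address=%s", joinCmd, nodeIP)
	if _, err := runOn(newNode, join); err != nil {
		log.Fatal(err)
	}
	fmt.Println("control-plane node joined")
}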
	I0906 18:53:42.438982   24633 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 18:53:42.439381   24633 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:53:42.440154   24633 out.go:177] * Verifying Kubernetes components...
	I0906 18:53:42.441171   24633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:53:42.775930   24633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:53:42.811766   24633 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:53:42.812301   24633 kapi.go:59] client config for ha-313128: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.crt", KeyFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key", CAFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0906 18:53:42.812480   24633 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.70:8443
	I0906 18:53:42.812776   24633 node_ready.go:35] waiting up to 6m0s for node "ha-313128-m03" to be "Ready" ...
	I0906 18:53:42.812881   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:42.812892   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:42.812903   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:42.812912   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:42.816347   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:43.313894   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:43.313920   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:43.313931   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:43.313940   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:43.317699   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:43.813704   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:43.813726   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:43.813734   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:43.813738   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:43.817359   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:44.313031   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:44.313052   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:44.313060   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:44.313064   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:44.316285   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:44.813055   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:44.813080   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:44.813089   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:44.813094   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:44.816444   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:44.817198   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:45.312995   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:45.313037   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:45.313047   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:45.313052   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:45.316807   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:45.813870   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:45.813898   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:45.813909   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:45.813914   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:45.817869   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:46.313985   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:46.314011   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:46.314024   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:46.314032   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:46.317099   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:46.813053   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:46.813079   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:46.813092   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:46.813099   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:46.822959   24633 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0906 18:53:46.823418   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:47.313752   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:47.313772   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:47.313780   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:47.313784   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:47.316959   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:47.813930   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:47.813953   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:47.813965   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:47.813972   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:47.817642   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:48.313980   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:48.314004   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:48.314012   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:48.314015   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:48.317443   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:48.812994   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:48.813026   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:48.813035   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:48.813039   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:48.816141   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:49.313677   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:49.313701   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:49.313711   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:49.313717   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:49.318967   24633 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0906 18:53:49.319464   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:49.813866   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:49.813889   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:49.813897   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:49.813901   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:49.816921   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:50.313853   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:50.313875   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:50.313882   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:50.313887   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:50.317260   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:50.813959   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:50.813998   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:50.814007   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:50.814011   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:50.817199   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:51.313011   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:51.313039   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:51.313047   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:51.313052   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:51.316841   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:51.814002   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:51.814028   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:51.814038   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:51.814044   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:51.817528   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:51.818454   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:52.313022   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:52.313046   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:52.313058   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:52.313064   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:52.316557   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:52.813552   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:52.813578   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:52.813590   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:52.813596   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:52.816773   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:53.313033   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:53.313056   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:53.313064   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:53.313067   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:53.316654   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:53.813671   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:53.813691   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:53.813699   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:53.813703   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:53.816712   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:54.313933   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:54.313956   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:54.313964   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:54.313968   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:54.317619   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:54.318574   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:54.813972   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:54.813994   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:54.814002   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:54.814012   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:54.817704   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:55.313028   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:55.313051   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:55.313059   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:55.313065   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:55.316670   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:55.813769   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:55.813792   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:55.813800   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:55.813804   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:55.817218   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:56.313025   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:56.313054   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:56.313064   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:56.313068   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:56.316489   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:56.813331   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:56.813353   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:56.813363   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:56.813368   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:56.816700   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:56.817403   24633 node_ready.go:53] node "ha-313128-m03" has status "Ready":"False"
	I0906 18:53:57.313949   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:57.313973   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.313983   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.313989   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.327439   24633 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0906 18:53:57.328358   24633 node_ready.go:49] node "ha-313128-m03" has status "Ready":"True"
	I0906 18:53:57.328378   24633 node_ready.go:38] duration metric: took 14.515582635s for node "ha-313128-m03" to be "Ready" ...
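The repeated GETs above are the readiness wait: the node object is fetched roughly every 500ms and its NodeReady condition is checked until it reports True, which here happens about 14.5s after the join. The same wait can be expressed directly with client-go; a minimal sketch, assuming a kubeconfig at an illustrative path:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is illustrative
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-313128-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	log.Fatal("timed out waiting for node to become Ready")
}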
	I0906 18:53:57.328389   24633 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:53:57.328477   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:53:57.328488   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.328498   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.328503   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.335604   24633 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0906 18:53:57.342737   24633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.342809   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-gccvh
	I0906 18:53:57.342815   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.342825   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.342831   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.345862   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:57.346611   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:53:57.346627   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.346634   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.346639   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.349258   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.349714   24633 pod_ready.go:93] pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:57.349733   24633 pod_ready.go:82] duration metric: took 6.974302ms for pod "coredns-6f6b679f8f-gccvh" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.349744   24633 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.349805   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-gk28z
	I0906 18:53:57.349815   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.349825   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.349832   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.352547   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.353211   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:53:57.353233   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.353244   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.353251   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.355705   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.356421   24633 pod_ready.go:93] pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:57.356441   24633 pod_ready.go:82] duration metric: took 6.689336ms for pod "coredns-6f6b679f8f-gk28z" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.356453   24633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.356510   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128
	I0906 18:53:57.356521   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.356533   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.356542   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.359039   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.359573   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:53:57.359590   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.359599   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.359603   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.362106   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.362720   24633 pod_ready.go:93] pod "etcd-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:57.362737   24633 pod_ready.go:82] duration metric: took 6.276937ms for pod "etcd-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.362747   24633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.362796   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m02
	I0906 18:53:57.362806   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.362815   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.362826   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.369660   24633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 18:53:57.370162   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:53:57.370177   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.370186   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.370191   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.372802   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:57.373457   24633 pod_ready.go:93] pod "etcd-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:57.373480   24633 pod_ready.go:82] duration metric: took 10.722895ms for pod "etcd-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.373492   24633 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:57.514895   24633 request.go:632] Waited for 141.339391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m03
	I0906 18:53:57.514968   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m03
	I0906 18:53:57.514976   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.514985   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.514993   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.518559   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
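The "Waited ... due to client-side throttling" messages come from client-go's own token-bucket rate limiter, not from API Priority and Fairness on the server: with QPS and Burst left at zero in the rest.Config above, the client falls back to its defaults (5 requests/s, burst 10), and the back-to-back pod and node GETs briefly exceed that. A sketch of raising the limits before building the clientset; the numbers are arbitrary:

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is illustrative
	if err != nil {
		log.Fatal(err)
	}
	// Zero values mean "use defaults" (5 QPS, burst 10); raise them to avoid
	// the client-side throttling waits seen in the log.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
}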
	I0906 18:53:57.714441   24633 request.go:632] Waited for 195.349087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:57.714504   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:57.714512   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.714522   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.714527   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.717936   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:57.914384   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m03
	I0906 18:53:57.914409   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:57.914419   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:57.914426   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:57.918369   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:58.114393   24633 request.go:632] Waited for 195.358749ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:58.114452   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:58.114457   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:58.114464   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:58.114469   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:58.117810   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:58.374575   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128-m03
	I0906 18:53:58.374600   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:58.374609   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:58.374616   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:58.378690   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:53:58.514368   24633 request.go:632] Waited for 134.771045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:58.514438   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:58.514449   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:58.514459   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:58.514471   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:58.518091   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:58.518697   24633 pod_ready.go:93] pod "etcd-ha-313128-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:58.518712   24633 pod_ready.go:82] duration metric: took 1.145213644s for pod "etcd-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:58.518732   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:58.714005   24633 request.go:632] Waited for 195.202478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128
	I0906 18:53:58.714095   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128
	I0906 18:53:58.714103   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:58.714117   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:58.714129   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:58.717314   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:58.914271   24633 request.go:632] Waited for 196.153535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:53:58.914335   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:53:58.914344   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:58.914358   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:58.914366   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:58.917837   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:58.918643   24633 pod_ready.go:93] pod "kube-apiserver-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:58.918660   24633 pod_ready.go:82] duration metric: took 399.921214ms for pod "kube-apiserver-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:58.918669   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:59.114766   24633 request.go:632] Waited for 196.017542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128-m02
	I0906 18:53:59.114831   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128-m02
	I0906 18:53:59.114839   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:59.114852   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:59.114860   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:59.118605   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:59.314628   24633 request.go:632] Waited for 195.357248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:53:59.314681   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:53:59.314687   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:59.314696   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:59.314708   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:59.317819   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:59.318398   24633 pod_ready.go:93] pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:59.318414   24633 pod_ready.go:82] duration metric: took 399.739323ms for pod "kube-apiserver-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:59.318426   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:59.514622   24633 request.go:632] Waited for 196.133616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128-m03
	I0906 18:53:59.514701   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128-m03
	I0906 18:53:59.514707   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:59.514715   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:59.514719   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:59.518088   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:53:59.714940   24633 request.go:632] Waited for 196.072496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:59.714999   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:53:59.715005   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:59.715012   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:59.715016   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:59.717813   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:53:59.718565   24633 pod_ready.go:93] pod "kube-apiserver-ha-313128-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 18:53:59.718584   24633 pod_ready.go:82] duration metric: took 400.146943ms for pod "kube-apiserver-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:59.718598   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:53:59.914728   24633 request.go:632] Waited for 196.064081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128
	I0906 18:53:59.914800   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128
	I0906 18:53:59.914805   24633 round_trippers.go:469] Request Headers:
	I0906 18:53:59.914813   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:53:59.914821   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:53:59.918524   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.114637   24633 request.go:632] Waited for 195.373041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:00.114703   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:00.114710   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:00.114721   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:00.114729   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:00.118047   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.118811   24633 pod_ready.go:93] pod "kube-controller-manager-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:00.118830   24633 pod_ready.go:82] duration metric: took 400.22454ms for pod "kube-controller-manager-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:00.118840   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:00.314834   24633 request.go:632] Waited for 195.917876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m02
	I0906 18:54:00.314899   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m02
	I0906 18:54:00.314906   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:00.314916   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:00.314926   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:00.318082   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.514111   24633 request.go:632] Waited for 195.120873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:00.514172   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:00.514179   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:00.514197   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:00.514205   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:00.517491   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.518099   24633 pod_ready.go:93] pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:00.518116   24633 pod_ready.go:82] duration metric: took 399.268736ms for pod "kube-controller-manager-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:00.518126   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:00.714447   24633 request.go:632] Waited for 196.253088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m03
	I0906 18:54:00.714544   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-313128-m03
	I0906 18:54:00.714551   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:00.714565   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:00.714575   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:00.718114   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.914418   24633 request.go:632] Waited for 195.377075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:00.914483   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:00.914491   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:00.914500   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:00.914509   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:00.917901   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:00.918649   24633 pod_ready.go:93] pod "kube-controller-manager-ha-313128-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:00.918671   24633 pod_ready.go:82] duration metric: took 400.537166ms for pod "kube-controller-manager-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:00.918682   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gfjr7" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:01.114917   24633 request.go:632] Waited for 196.159274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gfjr7
	I0906 18:54:01.114989   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gfjr7
	I0906 18:54:01.114996   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:01.115007   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:01.115016   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:01.118521   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:01.314588   24633 request.go:632] Waited for 195.358728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:01.314668   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:01.314675   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:01.314682   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:01.314686   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:01.318029   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:01.318673   24633 pod_ready.go:93] pod "kube-proxy-gfjr7" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:01.318691   24633 pod_ready.go:82] duration metric: took 400.003139ms for pod "kube-proxy-gfjr7" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:01.318701   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h5xn7" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:01.514801   24633 request.go:632] Waited for 196.042574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5xn7
	I0906 18:54:01.514855   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5xn7
	I0906 18:54:01.514866   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:01.514885   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:01.514891   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:01.518511   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:01.714537   24633 request.go:632] Waited for 195.332709ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:01.714602   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:01.714609   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:01.714620   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:01.714626   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:01.717898   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:01.718416   24633 pod_ready.go:93] pod "kube-proxy-h5xn7" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:01.718434   24633 pod_ready.go:82] duration metric: took 399.727356ms for pod "kube-proxy-h5xn7" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:01.718446   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xjp6p" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:01.914543   24633 request.go:632] Waited for 196.020945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjp6p
	I0906 18:54:01.914611   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjp6p
	I0906 18:54:01.914617   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:01.914624   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:01.914629   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:01.918372   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:02.114514   24633 request.go:632] Waited for 195.35283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:02.114587   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:02.114593   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:02.114600   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:02.114604   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:02.118050   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:02.118591   24633 pod_ready.go:93] pod "kube-proxy-xjp6p" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:02.118610   24633 pod_ready.go:82] duration metric: took 400.155611ms for pod "kube-proxy-xjp6p" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:02.118620   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:02.313968   24633 request.go:632] Waited for 195.283751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128
	I0906 18:54:02.314056   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128
	I0906 18:54:02.314065   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:02.314077   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:02.314091   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:02.317646   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:02.514144   24633 request.go:632] Waited for 195.801776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:02.514208   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128
	I0906 18:54:02.514214   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:02.514221   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:02.514226   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:02.517249   24633 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0906 18:54:02.517938   24633 pod_ready.go:93] pod "kube-scheduler-ha-313128" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:02.517955   24633 pod_ready.go:82] duration metric: took 399.328108ms for pod "kube-scheduler-ha-313128" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:02.517964   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:02.714164   24633 request.go:632] Waited for 196.128114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m02
	I0906 18:54:02.714243   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m02
	I0906 18:54:02.714253   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:02.714264   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:02.714274   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:02.717794   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:02.914697   24633 request.go:632] Waited for 196.291724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:02.914751   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02
	I0906 18:54:02.914759   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:02.914768   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:02.914779   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:02.918615   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:02.919354   24633 pod_ready.go:93] pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:02.919370   24633 pod_ready.go:82] duration metric: took 401.399291ms for pod "kube-scheduler-ha-313128-m02" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:02.919381   24633 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:03.114558   24633 request.go:632] Waited for 195.096741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m03
	I0906 18:54:03.114639   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-313128-m03
	I0906 18:54:03.114653   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.114665   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.114676   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.117825   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:03.314865   24633 request.go:632] Waited for 196.35431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:03.314945   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes/ha-313128-m03
	I0906 18:54:03.314951   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.314958   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.314962   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.318254   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:03.318931   24633 pod_ready.go:93] pod "kube-scheduler-ha-313128-m03" in "kube-system" namespace has status "Ready":"True"
	I0906 18:54:03.318948   24633 pod_ready.go:82] duration metric: took 399.560197ms for pod "kube-scheduler-ha-313128-m03" in "kube-system" namespace to be "Ready" ...
	I0906 18:54:03.318958   24633 pod_ready.go:39] duration metric: took 5.990557854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:54:03.318972   24633 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:54:03.319025   24633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:54:03.334503   24633 api_server.go:72] duration metric: took 20.895485689s to wait for apiserver process to appear ...
	I0906 18:54:03.334523   24633 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:54:03.334540   24633 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I0906 18:54:03.340935   24633 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I0906 18:54:03.341012   24633 round_trippers.go:463] GET https://192.168.39.70:8443/version
	I0906 18:54:03.341023   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.341034   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.341043   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.341830   24633 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0906 18:54:03.341912   24633 api_server.go:141] control plane version: v1.31.0
	I0906 18:54:03.341930   24633 api_server.go:131] duration metric: took 7.401121ms to wait for apiserver health ...
	I0906 18:54:03.341940   24633 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:54:03.514101   24633 request.go:632] Waited for 172.091152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:54:03.514158   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:54:03.514164   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.514172   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.514175   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.520237   24633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 18:54:03.526898   24633 system_pods.go:59] 24 kube-system pods found
	I0906 18:54:03.526925   24633 system_pods.go:61] "coredns-6f6b679f8f-gccvh" [9b7c0e1a-3359-4f9f-826c-b75cbdfcd500] Running
	I0906 18:54:03.526931   24633 system_pods.go:61] "coredns-6f6b679f8f-gk28z" [ab595ef6-eaa8-44a0-bdad-ddd59c8d052d] Running
	I0906 18:54:03.526935   24633 system_pods.go:61] "etcd-ha-313128" [b4550b86-3359-44e6-a495-7db003a1bb95] Running
	I0906 18:54:03.526939   24633 system_pods.go:61] "etcd-ha-313128-m02" [d42fd6b2-4ecd-49e8-b5b2-5b29fabe2d1e] Running
	I0906 18:54:03.526943   24633 system_pods.go:61] "etcd-ha-313128-m03" [389e0f5d-34fa-40ff-bba5-079485a68d04] Running
	I0906 18:54:03.526946   24633 system_pods.go:61] "kindnet-h2trt" [90af3550-1fae-46bd-9329-f185fcdb23c6] Running
	I0906 18:54:03.526949   24633 system_pods.go:61] "kindnet-jl257" [0c8c46d5-9a1f-40c6-823e-3e0afca658c5] Running
	I0906 18:54:03.526953   24633 system_pods.go:61] "kindnet-t65ls" [657498aa-b76e-4eb2-abbe-5d8a050fc415] Running
	I0906 18:54:03.526958   24633 system_pods.go:61] "kube-apiserver-ha-313128" [081ff647-e9c5-4cce-895a-e5e660db1acc] Running
	I0906 18:54:03.526960   24633 system_pods.go:61] "kube-apiserver-ha-313128-m02" [bed2808b-bef1-4fd4-a811-762a5ff46343] Running
	I0906 18:54:03.526966   24633 system_pods.go:61] "kube-apiserver-ha-313128-m03" [df855b79-c920-42c5-a8c2-d4d97c4d0fed] Running
	I0906 18:54:03.526970   24633 system_pods.go:61] "kube-controller-manager-ha-313128" [ba0308a5-06d8-468b-a3a1-e95a28a52dd7] Running
	I0906 18:54:03.526975   24633 system_pods.go:61] "kube-controller-manager-ha-313128-m02" [3f4032ce-c8b4-4a2c-9384-82d5d5ec0874] Running
	I0906 18:54:03.526979   24633 system_pods.go:61] "kube-controller-manager-ha-313128-m03" [4f975f72-075c-43dd-b104-bdf5172f45ed] Running
	I0906 18:54:03.526985   24633 system_pods.go:61] "kube-proxy-gfjr7" [2fb5a899-48c8-4e96-ac8e-b77570ecaf26] Running
	I0906 18:54:03.526989   24633 system_pods.go:61] "kube-proxy-h5xn7" [e45358c5-398e-4d33-9bd0-a4f28ce17ac9] Running
	I0906 18:54:03.526994   24633 system_pods.go:61] "kube-proxy-xjp6p" [0cbbf003-361c-441e-a2fe-18783999b020] Running
	I0906 18:54:03.526998   24633 system_pods.go:61] "kube-scheduler-ha-313128" [8580599b-125e-4a2f-9019-41b305c0f611] Running
	I0906 18:54:03.527001   24633 system_pods.go:61] "kube-scheduler-ha-313128-m02" [81cb0c5f-7e54-4e8c-b089-d6a4e2c9cbf0] Running
	I0906 18:54:03.527005   24633 system_pods.go:61] "kube-scheduler-ha-313128-m03" [a49687b2-124f-49c7-abfe-5e401ebabc1f] Running
	I0906 18:54:03.527009   24633 system_pods.go:61] "kube-vip-ha-313128" [6e270949-38fe-475f-b902-ede9d2cb795f] Running
	I0906 18:54:03.527012   24633 system_pods.go:61] "kube-vip-ha-313128-m02" [949996a0-0ce0-4ce4-b9ec-86c8f35a4a96] Running
	I0906 18:54:03.527017   24633 system_pods.go:61] "kube-vip-ha-313128-m03" [867dc2d0-034e-45d9-b3c2-72179e58597e] Running
	I0906 18:54:03.527021   24633 system_pods.go:61] "storage-provisioner" [6c957eac-7904-4c39-b858-bfb7da32c75c] Running
	I0906 18:54:03.527029   24633 system_pods.go:74] duration metric: took 185.079358ms to wait for pod list to return data ...
	I0906 18:54:03.527037   24633 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:54:03.714476   24633 request.go:632] Waited for 187.354456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0906 18:54:03.714532   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/default/serviceaccounts
	I0906 18:54:03.714538   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.714552   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.714560   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.719117   24633 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0906 18:54:03.719260   24633 default_sa.go:45] found service account: "default"
	I0906 18:54:03.719283   24633 default_sa.go:55] duration metric: took 192.237231ms for default service account to be created ...
	I0906 18:54:03.719295   24633 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:54:03.914779   24633 request.go:632] Waited for 195.388568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:54:03.914859   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/namespaces/kube-system/pods
	I0906 18:54:03.914870   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:03.914881   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:03.914890   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:03.921370   24633 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0906 18:54:03.930987   24633 system_pods.go:86] 24 kube-system pods found
	I0906 18:54:03.931020   24633 system_pods.go:89] "coredns-6f6b679f8f-gccvh" [9b7c0e1a-3359-4f9f-826c-b75cbdfcd500] Running
	I0906 18:54:03.931027   24633 system_pods.go:89] "coredns-6f6b679f8f-gk28z" [ab595ef6-eaa8-44a0-bdad-ddd59c8d052d] Running
	I0906 18:54:03.931031   24633 system_pods.go:89] "etcd-ha-313128" [b4550b86-3359-44e6-a495-7db003a1bb95] Running
	I0906 18:54:03.931035   24633 system_pods.go:89] "etcd-ha-313128-m02" [d42fd6b2-4ecd-49e8-b5b2-5b29fabe2d1e] Running
	I0906 18:54:03.931039   24633 system_pods.go:89] "etcd-ha-313128-m03" [389e0f5d-34fa-40ff-bba5-079485a68d04] Running
	I0906 18:54:03.931043   24633 system_pods.go:89] "kindnet-h2trt" [90af3550-1fae-46bd-9329-f185fcdb23c6] Running
	I0906 18:54:03.931046   24633 system_pods.go:89] "kindnet-jl257" [0c8c46d5-9a1f-40c6-823e-3e0afca658c5] Running
	I0906 18:54:03.931050   24633 system_pods.go:89] "kindnet-t65ls" [657498aa-b76e-4eb2-abbe-5d8a050fc415] Running
	I0906 18:54:03.931059   24633 system_pods.go:89] "kube-apiserver-ha-313128" [081ff647-e9c5-4cce-895a-e5e660db1acc] Running
	I0906 18:54:03.931064   24633 system_pods.go:89] "kube-apiserver-ha-313128-m02" [bed2808b-bef1-4fd4-a811-762a5ff46343] Running
	I0906 18:54:03.931069   24633 system_pods.go:89] "kube-apiserver-ha-313128-m03" [df855b79-c920-42c5-a8c2-d4d97c4d0fed] Running
	I0906 18:54:03.931076   24633 system_pods.go:89] "kube-controller-manager-ha-313128" [ba0308a5-06d8-468b-a3a1-e95a28a52dd7] Running
	I0906 18:54:03.931082   24633 system_pods.go:89] "kube-controller-manager-ha-313128-m02" [3f4032ce-c8b4-4a2c-9384-82d5d5ec0874] Running
	I0906 18:54:03.931087   24633 system_pods.go:89] "kube-controller-manager-ha-313128-m03" [4f975f72-075c-43dd-b104-bdf5172f45ed] Running
	I0906 18:54:03.931097   24633 system_pods.go:89] "kube-proxy-gfjr7" [2fb5a899-48c8-4e96-ac8e-b77570ecaf26] Running
	I0906 18:54:03.931102   24633 system_pods.go:89] "kube-proxy-h5xn7" [e45358c5-398e-4d33-9bd0-a4f28ce17ac9] Running
	I0906 18:54:03.931106   24633 system_pods.go:89] "kube-proxy-xjp6p" [0cbbf003-361c-441e-a2fe-18783999b020] Running
	I0906 18:54:03.931111   24633 system_pods.go:89] "kube-scheduler-ha-313128" [8580599b-125e-4a2f-9019-41b305c0f611] Running
	I0906 18:54:03.931118   24633 system_pods.go:89] "kube-scheduler-ha-313128-m02" [81cb0c5f-7e54-4e8c-b089-d6a4e2c9cbf0] Running
	I0906 18:54:03.931121   24633 system_pods.go:89] "kube-scheduler-ha-313128-m03" [a49687b2-124f-49c7-abfe-5e401ebabc1f] Running
	I0906 18:54:03.931127   24633 system_pods.go:89] "kube-vip-ha-313128" [6e270949-38fe-475f-b902-ede9d2cb795f] Running
	I0906 18:54:03.931131   24633 system_pods.go:89] "kube-vip-ha-313128-m02" [949996a0-0ce0-4ce4-b9ec-86c8f35a4a96] Running
	I0906 18:54:03.931139   24633 system_pods.go:89] "kube-vip-ha-313128-m03" [867dc2d0-034e-45d9-b3c2-72179e58597e] Running
	I0906 18:54:03.931147   24633 system_pods.go:89] "storage-provisioner" [6c957eac-7904-4c39-b858-bfb7da32c75c] Running
	I0906 18:54:03.931155   24633 system_pods.go:126] duration metric: took 211.85328ms to wait for k8s-apps to be running ...
	I0906 18:54:03.931167   24633 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:54:03.931222   24633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:54:03.948768   24633 system_svc.go:56] duration metric: took 17.590976ms WaitForService to wait for kubelet
	I0906 18:54:03.948803   24633 kubeadm.go:582] duration metric: took 21.509787394s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:54:03.948831   24633 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:54:04.114236   24633 request.go:632] Waited for 165.302052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.70:8443/api/v1/nodes
	I0906 18:54:04.114297   24633 round_trippers.go:463] GET https://192.168.39.70:8443/api/v1/nodes
	I0906 18:54:04.114303   24633 round_trippers.go:469] Request Headers:
	I0906 18:54:04.114310   24633 round_trippers.go:473]     Accept: application/json, */*
	I0906 18:54:04.114313   24633 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0906 18:54:04.118103   24633 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0906 18:54:04.119134   24633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:54:04.119155   24633 node_conditions.go:123] node cpu capacity is 2
	I0906 18:54:04.119171   24633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:54:04.119174   24633 node_conditions.go:123] node cpu capacity is 2
	I0906 18:54:04.119178   24633 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 18:54:04.119181   24633 node_conditions.go:123] node cpu capacity is 2
	I0906 18:54:04.119186   24633 node_conditions.go:105] duration metric: took 170.348782ms to run NodePressure ...
	I0906 18:54:04.119199   24633 start.go:241] waiting for startup goroutines ...
	I0906 18:54:04.119227   24633 start.go:255] writing updated cluster config ...
	I0906 18:54:04.119521   24633 ssh_runner.go:195] Run: rm -f paused
	I0906 18:54:04.170894   24633 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 18:54:04.173352   24633 out.go:177] * Done! kubectl is now configured to use "ha-313128" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.684258385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649118684230527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=468e4c11-0abb-41bb-b938-f8fcafabde9e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.685092351Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ab88cb6-0c42-4069-8300-a584133eefb3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.685163503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ab88cb6-0c42-4069-8300-a584133eefb3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.685393214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725648847674894407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704565865782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d,PodSandboxId:b08178bcf1de75f873c948d3e6641dc5d0ae48e4b5420eebfad85d8caabda791,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725648704521266152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704439858159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-33
59-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725648692553241731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172564869
0396327444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d,PodSandboxId:68a537b5386bf0dc2a954b946f2a376eea0d8d10ec6e3b2ab4c6e6f1f7dbebd8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172564868078
7067794,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4632629df72b4c4f23c3be823465189,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f,PodSandboxId:b9f62786c7a95e9ef333ad31c2626202c9a1de9167e00facaad0a995ca9f4799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725648679066355632,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387,PodSandboxId:9fee72e04c13707f52815f360e30c0db2e46b810e8ce54b184507a5ce3f1d06d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725648679036550947,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725648678969341073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725648678980034761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ab88cb6-0c42-4069-8300-a584133eefb3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.725408686Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44446177-86b2-4c0b-a032-5426fbdeee17 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.725608236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44446177-86b2-4c0b-a032-5426fbdeee17 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.727271299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5e84666-305f-4df5-a2be-f2459bca0138 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.727860363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649118727833703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5e84666-305f-4df5-a2be-f2459bca0138 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.728415025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05d465d5-3a62-4077-b508-e0b2ee350e80 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.728526111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05d465d5-3a62-4077-b508-e0b2ee350e80 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.728762964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725648847674894407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704565865782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d,PodSandboxId:b08178bcf1de75f873c948d3e6641dc5d0ae48e4b5420eebfad85d8caabda791,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725648704521266152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704439858159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-33
59-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725648692553241731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172564869
0396327444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d,PodSandboxId:68a537b5386bf0dc2a954b946f2a376eea0d8d10ec6e3b2ab4c6e6f1f7dbebd8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172564868078
7067794,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4632629df72b4c4f23c3be823465189,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f,PodSandboxId:b9f62786c7a95e9ef333ad31c2626202c9a1de9167e00facaad0a995ca9f4799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725648679066355632,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387,PodSandboxId:9fee72e04c13707f52815f360e30c0db2e46b810e8ce54b184507a5ce3f1d06d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725648679036550947,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725648678969341073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725648678980034761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05d465d5-3a62-4077-b508-e0b2ee350e80 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.773713239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c6c4797-dcdc-46d0-823e-4c25b9e56a04 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.773804129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c6c4797-dcdc-46d0-823e-4c25b9e56a04 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.775248185Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f177967-066d-4d5d-a23e-6541503770b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.775856309Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649118775831072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f177967-066d-4d5d-a23e-6541503770b7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.776533736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54e77abc-9252-4a44-a961-b812e2cbe257 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.776606773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54e77abc-9252-4a44-a961-b812e2cbe257 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.776849408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725648847674894407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704565865782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d,PodSandboxId:b08178bcf1de75f873c948d3e6641dc5d0ae48e4b5420eebfad85d8caabda791,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725648704521266152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704439858159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-33
59-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725648692553241731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172564869
0396327444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d,PodSandboxId:68a537b5386bf0dc2a954b946f2a376eea0d8d10ec6e3b2ab4c6e6f1f7dbebd8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172564868078
7067794,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4632629df72b4c4f23c3be823465189,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f,PodSandboxId:b9f62786c7a95e9ef333ad31c2626202c9a1de9167e00facaad0a995ca9f4799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725648679066355632,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387,PodSandboxId:9fee72e04c13707f52815f360e30c0db2e46b810e8ce54b184507a5ce3f1d06d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725648679036550947,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725648678969341073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725648678980034761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54e77abc-9252-4a44-a961-b812e2cbe257 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.821442667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2295fd6-4328-45e4-aae9-5a4a1d68a714 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.821574276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2295fd6-4328-45e4-aae9-5a4a1d68a714 name=/runtime.v1.RuntimeService/Version
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.823061787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5748737-76b9-4d93-b3c2-6d10f01b8dd3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.824940602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649118824904949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5748737-76b9-4d93-b3c2-6d10f01b8dd3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.825591900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ee5c19e-9c4b-43d1-9483-79da5097aa79 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.825692291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ee5c19e-9c4b-43d1-9483-79da5097aa79 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 18:58:38 ha-313128 crio[668]: time="2024-09-06 18:58:38.826224707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725648847674894407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704565865782,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d,PodSandboxId:b08178bcf1de75f873c948d3e6641dc5d0ae48e4b5420eebfad85d8caabda791,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725648704521266152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725648704439858159,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-33
59-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1725648692553241731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172564869
0396327444,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d,PodSandboxId:68a537b5386bf0dc2a954b946f2a376eea0d8d10ec6e3b2ab4c6e6f1f7dbebd8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172564868078
7067794,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4632629df72b4c4f23c3be823465189,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f,PodSandboxId:b9f62786c7a95e9ef333ad31c2626202c9a1de9167e00facaad0a995ca9f4799,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725648679066355632,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387,PodSandboxId:9fee72e04c13707f52815f360e30c0db2e46b810e8ce54b184507a5ce3f1d06d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725648679036550947,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725648678969341073,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725648678980034761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ee5c19e-9c4b-43d1-9483-79da5097aa79 name=/runtime.v1.RuntimeService/ListContainers
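
	The CRI-O entries above are the kubelet's routine ListContainers, Version and ImageFsInfo polls, and every request is answered normally. The same checks can be reproduced by hand with crictl against the cri-socket path advertised in the node annotations below; the `minikube ssh` profile name is an assumption here, not something recorded in this log:
	
	  $ minikube ssh -p ha-313128
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo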
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b3f2cd2f6c9c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   74b84ec8f17a7       busybox-7dff88458-s2cgz
	5b950806bc4b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   9151daea570f3       coredns-6f6b679f8f-gk28z
	ffd27ffbc9742       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   b08178bcf1de7       storage-provisioner
	76bbd732b8695       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   8449d8c8bfa3e       coredns-6f6b679f8f-gccvh
	76ca94f153009       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   a3128d8e090be       kindnet-h2trt
	135074e446370       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   dde7791c0770a       kube-proxy-h5xn7
	13b08e833a9ce       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   68a537b5386bf       kube-vip-ha-313128
	7f7c5c81b9e05       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   b9f62786c7a95       kube-controller-manager-ha-313128
	9a30d709b3b92       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   9fee72e04c137       kube-apiserver-ha-313128
	e32b22b9f83ac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   0ced27e2ded46       etcd-ha-313128
	a406aeec43303       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   aeb85ed29ab1d       kube-scheduler-ha-313128
	
	
	==> coredns [5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939] <==
	[INFO] 10.244.1.2:46138 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212814s
	[INFO] 10.244.1.2:37199 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142816s
	[INFO] 10.244.1.2:59435 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000134263s
	[INFO] 10.244.2.2:55641 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152123s
	[INFO] 10.244.2.2:44100 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142405s
	[INFO] 10.244.2.2:36497 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120125s
	[INFO] 10.244.2.2:48348 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089194s
	[INFO] 10.244.2.2:54108 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074315s
	[INFO] 10.244.0.4:40347 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182567s
	[INFO] 10.244.0.4:52272 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006329s
	[INFO] 10.244.0.4:51714 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082631s
	[INFO] 10.244.1.2:48124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011336s
	[INFO] 10.244.1.2:41760 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105711s
	[INFO] 10.244.2.2:36465 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000146663s
	[INFO] 10.244.2.2:60287 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114443s
	[INFO] 10.244.0.4:42561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009569s
	[INFO] 10.244.0.4:55114 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086084s
	[INFO] 10.244.0.4:53953 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067022s
	[INFO] 10.244.1.2:48594 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121564s
	[INFO] 10.244.1.2:53114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166914s
	[INFO] 10.244.2.2:34659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158468s
	[INFO] 10.244.2.2:34171 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176512s
	[INFO] 10.244.0.4:58990 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009694s
	[INFO] 10.244.0.4:43562 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118003s
	[INFO] 10.244.0.4:33609 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086781s
	
	
	==> coredns [76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa] <==
	[INFO] 10.244.2.2:49198 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000157417s
	[INFO] 10.244.2.2:45279 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001726225s
	[INFO] 10.244.0.4:43649 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000087809s
	[INFO] 10.244.0.4:48739 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001332361s
	[INFO] 10.244.1.2:58049 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00030226s
	[INFO] 10.244.1.2:40610 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.031276485s
	[INFO] 10.244.1.2:56981 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192216s
	[INFO] 10.244.2.2:34827 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217382s
	[INFO] 10.244.2.2:57219 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001699092s
	[INFO] 10.244.2.2:58659 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001077242s
	[INFO] 10.244.0.4:54771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075932s
	[INFO] 10.244.0.4:36423 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001645163s
	[INFO] 10.244.0.4:44712 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063493s
	[INFO] 10.244.0.4:58952 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001116094s
	[INFO] 10.244.0.4:58673 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091919s
	[INFO] 10.244.1.2:35244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089298s
	[INFO] 10.244.1.2:54461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083864s
	[INFO] 10.244.2.2:46046 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126212s
	[INFO] 10.244.2.2:45762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078805s
	[INFO] 10.244.0.4:56166 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109081s
	[INFO] 10.244.1.2:44485 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175559s
	[INFO] 10.244.1.2:60331 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113433s
	[INFO] 10.244.2.2:33944 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094759s
	[INFO] 10.244.2.2:54249 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007626s
	[INFO] 10.244.0.4:34049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091783s
	
	
	==> describe nodes <==
	Name:               ha-313128
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_51_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:58:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:54:29 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:54:29 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:54:29 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:54:29 +0000   Fri, 06 Sep 2024 18:51:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-313128
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a8374058d8a4ce69ddf9d9b9a6bab88
	  System UUID:                5a837405-8d8a-4ce6-9ddf-9d9b9a6bab88
	  Boot ID:                    4ac8491f-e614-44c2-96e0-f1733bbe0f17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s2cgz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-6f6b679f8f-gccvh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system                 coredns-6f6b679f8f-gk28z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system                 etcd-ha-313128                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m14s
	  kube-system                 kindnet-h2trt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m10s
	  kube-system                 kube-apiserver-ha-313128             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-controller-manager-ha-313128    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-proxy-h5xn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-scheduler-ha-313128             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-vip-ha-313128                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m8s   kube-proxy       
	  Normal  Starting                 7m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m14s  kubelet          Node ha-313128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m14s  kubelet          Node ha-313128 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m14s  kubelet          Node ha-313128 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m11s  node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Normal  NodeReady                6m56s  kubelet          Node ha-313128 status is now: NodeReady
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Normal  RegisteredNode           4m52s  node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	
	
	Name:               ha-313128-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_52_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:52:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:55:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 18:54:21 +0000   Fri, 06 Sep 2024 18:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 18:54:21 +0000   Fri, 06 Sep 2024 18:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 18:54:21 +0000   Fri, 06 Sep 2024 18:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 18:54:21 +0000   Fri, 06 Sep 2024 18:55:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    ha-313128-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9324a423f4b54997b7d3837f23afbaaf
	  System UUID:                9324a423-f4b5-4997-b7d3-837f23afbaaf
	  Boot ID:                    5b6464a0-918c-48fa-869b-49bf49ced3f9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-54m66                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 etcd-ha-313128-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-t65ls                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-313128-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-313128-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-proxy-xjp6p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-313128-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-313128-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m14s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m21s (x8 over 6m22s)  kubelet          Node ha-313128-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x8 over 6m22s)  kubelet          Node ha-313128-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m22s)  kubelet          Node ha-313128-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s                  node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  NodeNotReady             2m47s                  node-controller  Node ha-313128-m02 status is now: NodeNotReady
	
	
	Name:               ha-313128-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_53_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:53:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:58:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:54:09 +0000   Fri, 06 Sep 2024 18:53:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:54:09 +0000   Fri, 06 Sep 2024 18:53:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:54:09 +0000   Fri, 06 Sep 2024 18:53:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:54:09 +0000   Fri, 06 Sep 2024 18:53:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    ha-313128-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d33107b982c427ca47333d2971ade3a
	  System UUID:                1d33107b-982c-427c-a473-33d2971ade3a
	  Boot ID:                    b026d73c-eaf0-4a0e-9fe3-8e30ea0ed740
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k99v6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 etcd-ha-313128-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m59s
	  kube-system                 kindnet-jl257                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m1s
	  kube-system                 kube-apiserver-ha-313128-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-ha-313128-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-gfjr7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-ha-313128-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-vip-ha-313128-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m54s                kube-proxy       
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	  Normal  NodeHasSufficientMemory  5m1s (x8 over 5m1s)  kubelet          Node ha-313128-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m1s (x8 over 5m1s)  kubelet          Node ha-313128-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m1s (x7 over 5m1s)  kubelet          Node ha-313128-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m56s                node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	  Normal  RegisteredNode           4m52s                node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	
	
	Name:               ha-313128-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_54_39_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:54:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:58:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 18:54:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 18:54:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 18:54:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 18:54:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-313128-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1284faaf1604a6db25bba3bb7ed5953
	  System UUID:                f1284faa-f160-4a6d-b25b-ba3bb7ed5953
	  Boot ID:                    25844c67-e2f9-444b-99b9-94b7e385f59f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fsbs9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-proxy-8tm7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3m55s              kube-proxy       
	  Normal  NodeAllocatableEnforced  4m1s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m (x2 over 4m1s)  kubelet          Node ha-313128-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x2 over 4m1s)  kubelet          Node ha-313128-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x2 over 4m1s)  kubelet          Node ha-313128-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m57s              node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  RegisteredNode           3m56s              node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  RegisteredNode           3m56s              node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  NodeReady                3m41s              kubelet          Node ha-313128-m04 status is now: NodeReady
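	
	In the node descriptions above, ha-313128-m02 carries the node.kubernetes.io/unreachable taints and all of its conditions are Unknown ("Kubelet stopped posting node status"), while ha-313128, ha-313128-m03 and ha-313128-m04 report Ready. A minimal way to re-check that state from the host, assuming the kubectl context is named after the minikube profile (an assumption, not shown in this output):
	
	  $ kubectl --context ha-313128 get nodes -o wide
	  $ kubectl --context ha-313128 get node ha-313128-m02 \
	      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'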
	
	
	==> dmesg <==
	[Sep 6 18:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050608] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040146] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.800760] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.489418] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.624819] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 18:51] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072122] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.201564] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.131661] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.284243] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.067260] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.541515] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.060417] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251462] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.088029] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.073110] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.070796] kauditd_printk_skb: 38 callbacks suppressed
	[Sep 6 18:52] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8] <==
	{"level":"warn","ts":"2024-09-06T18:58:39.116617Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.120240Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.133655Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.140395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.146695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.150871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.154308Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.157871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.158181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.159865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.165944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.172606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.178751Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.178969Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.182238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.190715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.196207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.201845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.205030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.207816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.211188Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.218685Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.240272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.259015Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T18:58:39.294079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:58:39 up 7 min,  0 users,  load average: 0.10, 0.17, 0.10
	Linux ha-313128 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b] <==
	I0906 18:58:03.780106       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 18:58:13.776559       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 18:58:13.776675       1 main.go:299] handling current node
	I0906 18:58:13.776710       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 18:58:13.776729       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 18:58:13.776909       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 18:58:13.776942       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 18:58:13.777024       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 18:58:13.777043       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 18:58:23.778079       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 18:58:23.778195       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 18:58:23.778362       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 18:58:23.778402       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 18:58:23.778552       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 18:58:23.778587       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 18:58:23.778654       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 18:58:23.778672       1 main.go:299] handling current node
	I0906 18:58:33.769949       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 18:58:33.770112       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 18:58:33.770309       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 18:58:33.770337       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 18:58:33.770415       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 18:58:33.770435       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 18:58:33.770583       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 18:58:33.770664       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387] <==
	I0906 18:51:25.309945       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0906 18:51:25.457042       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0906 18:51:29.747411       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0906 18:51:29.810662       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0906 18:52:18.859356       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0906 18:52:18.859680       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 13.582µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0906 18:52:18.860826       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0906 18:52:18.862134       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0906 18:52:18.863594       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.388495ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0906 18:54:08.665104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47100: use of closed network connection
	E0906 18:54:08.848600       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47110: use of closed network connection
	E0906 18:54:09.036986       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47126: use of closed network connection
	E0906 18:54:09.270200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47144: use of closed network connection
	E0906 18:54:09.458294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47168: use of closed network connection
	E0906 18:54:09.652563       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47176: use of closed network connection
	E0906 18:54:09.835588       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47200: use of closed network connection
	E0906 18:54:10.009376       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47218: use of closed network connection
	E0906 18:54:10.188152       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47226: use of closed network connection
	E0906 18:54:10.471964       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47262: use of closed network connection
	E0906 18:54:10.648030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47290: use of closed network connection
	E0906 18:54:10.836992       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47306: use of closed network connection
	E0906 18:54:11.006159       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47322: use of closed network connection
	E0906 18:54:11.181297       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47338: use of closed network connection
	E0906 18:54:11.366046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:47362: use of closed network connection
	W0906 18:55:33.918550       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.172 192.168.39.70]
	
	
	==> kube-controller-manager [7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f] <==
	I0906 18:54:39.045844       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-313128-m04" podCIDRs=["10.244.3.0/24"]
	I0906 18:54:39.045912       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:39.046233       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:39.066260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:39.338155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:39.726463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:42.699047       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:43.120944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:43.221197       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:44.209153       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-313128-m04"
	I0906 18:54:44.210914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:44.405082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:49.440885       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:58.169228       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-313128-m04"
	I0906 18:54:58.169562       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:58.189627       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:54:59.226373       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:55:09.691672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 18:55:52.673339       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-313128-m04"
	I0906 18:55:52.673535       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	I0906 18:55:52.703026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	I0906 18:55:52.797314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.734822ms"
	I0906 18:55:52.797410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.591µs"
	I0906 18:55:54.272892       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	I0906 18:55:57.899341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	
	
	==> kube-proxy [135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 18:51:30.682674       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 18:51:30.696155       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.70"]
	E0906 18:51:30.696248       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:51:30.742708       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 18:51:30.742748       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 18:51:30.742776       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:51:30.746442       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:51:30.746885       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:51:30.747126       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:51:30.748722       1 config.go:197] "Starting service config controller"
	I0906 18:51:30.748777       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:51:30.748818       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:51:30.748834       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:51:30.756676       1 config.go:326] "Starting node config controller"
	I0906 18:51:30.756705       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:51:30.849938       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 18:51:30.850008       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:51:30.856862       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f] <==
	I0906 18:51:25.792122       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 18:53:38.426718       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-jl257\": pod kindnet-jl257 is already assigned to node \"ha-313128-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-jl257" node="ha-313128-m03"
	E0906 18:53:38.426941       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-jl257\": pod kindnet-jl257 is already assigned to node \"ha-313128-m03\"" pod="kube-system/kindnet-jl257"
	I0906 18:53:38.427016       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-jl257" node="ha-313128-m03"
	E0906 18:53:38.516668       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ll952\": pod kindnet-ll952 is already assigned to node \"ha-313128-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-ll952" node="ha-313128-m03"
	E0906 18:53:38.516957       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 80638b6e-9eca-4abb-a3df-4b95fc931417(kube-system/kindnet-ll952) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-ll952"
	E0906 18:53:38.517065       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ll952\": pod kindnet-ll952 is already assigned to node \"ha-313128-m03\"" pod="kube-system/kindnet-ll952"
	I0906 18:53:38.517106       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ll952" node="ha-313128-m03"
	E0906 18:54:05.046409       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k99v6\": pod busybox-7dff88458-k99v6 is already assigned to node \"ha-313128-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-k99v6" node="ha-313128-m02"
	E0906 18:54:05.046651       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-k99v6\": pod busybox-7dff88458-k99v6 is already assigned to node \"ha-313128-m03\"" pod="default/busybox-7dff88458-k99v6"
	E0906 18:54:05.096920       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-54m66\": pod busybox-7dff88458-54m66 is already assigned to node \"ha-313128-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-54m66" node="ha-313128-m02"
	E0906 18:54:05.096999       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 7267943f-285a-4790-987f-7fac660585fc(default/busybox-7dff88458-54m66) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-54m66"
	E0906 18:54:05.097028       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-54m66\": pod busybox-7dff88458-54m66 is already assigned to node \"ha-313128-m02\"" pod="default/busybox-7dff88458-54m66"
	I0906 18:54:05.097080       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-54m66" node="ha-313128-m02"
	E0906 18:54:39.142976       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8tm7b\": pod kube-proxy-8tm7b is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8tm7b" node="ha-313128-m04"
	E0906 18:54:39.143233       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b3bf864c-151e-4cad-b312-6c93ea87e678(kube-system/kube-proxy-8tm7b) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8tm7b"
	E0906 18:54:39.143315       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8tm7b\": pod kube-proxy-8tm7b is already assigned to node \"ha-313128-m04\"" pod="kube-system/kube-proxy-8tm7b"
	I0906 18:54:39.143372       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8tm7b" node="ha-313128-m04"
	E0906 18:54:39.143180       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k9szn\": pod kindnet-k9szn is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k9szn" node="ha-313128-m04"
	E0906 18:54:39.144192       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fdc10711-7099-424e-885e-65589f5642e5(kube-system/kindnet-k9szn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k9szn"
	E0906 18:54:39.144252       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k9szn\": pod kindnet-k9szn is already assigned to node \"ha-313128-m04\"" pod="kube-system/kindnet-k9szn"
	I0906 18:54:39.144297       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k9szn" node="ha-313128-m04"
	E0906 18:54:39.236601       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rnm78\": pod kube-proxy-rnm78 is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rnm78" node="ha-313128-m04"
	E0906 18:54:39.236925       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rnm78\": pod kube-proxy-rnm78 is already assigned to node \"ha-313128-m04\"" pod="kube-system/kube-proxy-rnm78"
	I0906 18:54:39.240895       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rnm78" node="ha-313128-m04"
	
	
	==> kubelet <==
	Sep 06 18:57:25 ha-313128 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 18:57:25 ha-313128 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 18:57:25 ha-313128 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 18:57:25 ha-313128 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 18:57:25 ha-313128 kubelet[1323]: E0906 18:57:25.568772    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649045568171193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:25 ha-313128 kubelet[1323]: E0906 18:57:25.568841    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649045568171193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:35 ha-313128 kubelet[1323]: E0906 18:57:35.571057    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649055570736168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:35 ha-313128 kubelet[1323]: E0906 18:57:35.571088    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649055570736168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:45 ha-313128 kubelet[1323]: E0906 18:57:45.573904    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649065573353770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:45 ha-313128 kubelet[1323]: E0906 18:57:45.573965    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649065573353770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:55 ha-313128 kubelet[1323]: E0906 18:57:55.575839    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649075575325216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:57:55 ha-313128 kubelet[1323]: E0906 18:57:55.576153    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649075575325216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:58:05 ha-313128 kubelet[1323]: E0906 18:58:05.578723    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649085578238344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:58:05 ha-313128 kubelet[1323]: E0906 18:58:05.578750    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649085578238344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:58:15 ha-313128 kubelet[1323]: E0906 18:58:15.580456    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649095580072772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:58:15 ha-313128 kubelet[1323]: E0906 18:58:15.580538    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649095580072772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:58:25 ha-313128 kubelet[1323]: E0906 18:58:25.512800    1323 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 18:58:25 ha-313128 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 18:58:25 ha-313128 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 18:58:25 ha-313128 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 18:58:25 ha-313128 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 18:58:25 ha-313128 kubelet[1323]: E0906 18:58:25.583176    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649105582567853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:58:25 ha-313128 kubelet[1323]: E0906 18:58:25.583240    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649105582567853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:58:35 ha-313128 kubelet[1323]: E0906 18:58:35.586976    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649115586681172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 18:58:35 ha-313128 kubelet[1323]: E0906 18:58:35.587350    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649115586681172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-313128 -n ha-313128
helpers_test.go:261: (dbg) Run:  kubectl --context ha-313128 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (56.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (814.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-313128 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-313128 -v=7 --alsologtostderr
E0906 18:59:49.184724   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:00:16.889303   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-313128 -v=7 --alsologtostderr: exit status 82 (2m1.905050226s)

                                                
                                                
-- stdout --
	* Stopping node "ha-313128-m04"  ...
	* Stopping node "ha-313128-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:58:40.696567   30495 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:58:40.696815   30495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:40.696824   30495 out.go:358] Setting ErrFile to fd 2...
	I0906 18:58:40.696828   30495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:58:40.697079   30495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:58:40.697348   30495 out.go:352] Setting JSON to false
	I0906 18:58:40.697448   30495 mustload.go:65] Loading cluster: ha-313128
	I0906 18:58:40.697824   30495 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:58:40.697923   30495 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 18:58:40.698108   30495 mustload.go:65] Loading cluster: ha-313128
	I0906 18:58:40.698281   30495 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:58:40.698323   30495 stop.go:39] StopHost: ha-313128-m04
	I0906 18:58:40.698734   30495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:40.698783   30495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:40.713717   30495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45609
	I0906 18:58:40.714193   30495 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:40.714739   30495 main.go:141] libmachine: Using API Version  1
	I0906 18:58:40.714762   30495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:40.715142   30495 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:40.717616   30495 out.go:177] * Stopping node "ha-313128-m04"  ...
	I0906 18:58:40.718925   30495 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 18:58:40.718947   30495 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 18:58:40.719172   30495 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 18:58:40.719209   30495 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 18:58:40.722148   30495 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:40.722529   30495 main.go:141] libmachine: (ha-313128-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:5b:b1", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:54:26 +0000 UTC Type:0 Mac:52:54:00:16:5b:b1 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:ha-313128-m04 Clientid:01:52:54:00:16:5b:b1}
	I0906 18:58:40.722558   30495 main.go:141] libmachine: (ha-313128-m04) DBG | domain ha-313128-m04 has defined IP address 192.168.39.68 and MAC address 52:54:00:16:5b:b1 in network mk-ha-313128
	I0906 18:58:40.722771   30495 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHPort
	I0906 18:58:40.722929   30495 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHKeyPath
	I0906 18:58:40.723077   30495 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHUsername
	I0906 18:58:40.723237   30495 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m04/id_rsa Username:docker}
	I0906 18:58:40.809102   30495 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0906 18:58:40.862892   30495 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0906 18:58:40.917563   30495 main.go:141] libmachine: Stopping "ha-313128-m04"...
	I0906 18:58:40.917586   30495 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:58:40.919146   30495 main.go:141] libmachine: (ha-313128-m04) Calling .Stop
	I0906 18:58:40.922738   30495 main.go:141] libmachine: (ha-313128-m04) Waiting for machine to stop 0/120
	I0906 18:58:42.138269   30495 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 18:58:42.139584   30495 main.go:141] libmachine: Machine "ha-313128-m04" was stopped.
	I0906 18:58:42.139603   30495 stop.go:75] duration metric: took 1.420679464s to stop
	I0906 18:58:42.139621   30495 stop.go:39] StopHost: ha-313128-m03
	I0906 18:58:42.139888   30495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:58:42.139921   30495 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:58:42.154836   30495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40129
	I0906 18:58:42.155324   30495 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:58:42.155826   30495 main.go:141] libmachine: Using API Version  1
	I0906 18:58:42.155855   30495 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:58:42.156188   30495 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:58:42.158194   30495 out.go:177] * Stopping node "ha-313128-m03"  ...
	I0906 18:58:42.159603   30495 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 18:58:42.159634   30495 main.go:141] libmachine: (ha-313128-m03) Calling .DriverName
	I0906 18:58:42.159882   30495 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 18:58:42.159914   30495 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHHostname
	I0906 18:58:42.162693   30495 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:42.163102   30495 main.go:141] libmachine: (ha-313128-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b3:07", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:52:57 +0000 UTC Type:0 Mac:52:54:00:90:b3:07 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:ha-313128-m03 Clientid:01:52:54:00:90:b3:07}
	I0906 18:58:42.163134   30495 main.go:141] libmachine: (ha-313128-m03) DBG | domain ha-313128-m03 has defined IP address 192.168.39.172 and MAC address 52:54:00:90:b3:07 in network mk-ha-313128
	I0906 18:58:42.163237   30495 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHPort
	I0906 18:58:42.163410   30495 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHKeyPath
	I0906 18:58:42.163564   30495 main.go:141] libmachine: (ha-313128-m03) Calling .GetSSHUsername
	I0906 18:58:42.163708   30495 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m03/id_rsa Username:docker}
	I0906 18:58:42.252921   30495 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0906 18:58:42.306726   30495 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0906 18:58:42.361841   30495 main.go:141] libmachine: Stopping "ha-313128-m03"...
	I0906 18:58:42.361869   30495 main.go:141] libmachine: (ha-313128-m03) Calling .GetState
	I0906 18:58:42.363518   30495 main.go:141] libmachine: (ha-313128-m03) Calling .Stop
	I0906 18:58:42.367240   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 0/120
	I0906 18:58:43.368652   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 1/120
	I0906 18:58:44.369875   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 2/120
	I0906 18:58:45.371344   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 3/120
	I0906 18:58:46.372712   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 4/120
	I0906 18:58:47.374645   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 5/120
	I0906 18:58:48.376055   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 6/120
	I0906 18:58:49.377561   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 7/120
	I0906 18:58:50.379296   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 8/120
	I0906 18:58:51.380566   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 9/120
	I0906 18:58:52.382134   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 10/120
	I0906 18:58:53.383804   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 11/120
	I0906 18:58:54.385150   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 12/120
	I0906 18:58:55.386751   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 13/120
	I0906 18:58:56.387943   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 14/120
	I0906 18:58:57.389659   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 15/120
	I0906 18:58:58.391197   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 16/120
	I0906 18:58:59.392407   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 17/120
	I0906 18:59:00.394002   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 18/120
	I0906 18:59:01.395275   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 19/120
	I0906 18:59:02.397173   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 20/120
	I0906 18:59:03.399507   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 21/120
	I0906 18:59:04.401272   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 22/120
	I0906 18:59:05.402856   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 23/120
	I0906 18:59:06.404098   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 24/120
	I0906 18:59:07.406104   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 25/120
	I0906 18:59:08.407357   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 26/120
	I0906 18:59:09.408813   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 27/120
	I0906 18:59:10.410121   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 28/120
	I0906 18:59:11.411506   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 29/120
	I0906 18:59:12.412935   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 30/120
	I0906 18:59:13.414243   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 31/120
	I0906 18:59:14.415716   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 32/120
	I0906 18:59:15.416983   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 33/120
	I0906 18:59:16.418456   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 34/120
	I0906 18:59:17.420397   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 35/120
	I0906 18:59:18.421698   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 36/120
	I0906 18:59:19.423151   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 37/120
	I0906 18:59:20.424470   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 38/120
	I0906 18:59:21.426021   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 39/120
	I0906 18:59:22.427784   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 40/120
	I0906 18:59:23.429003   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 41/120
	I0906 18:59:24.430398   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 42/120
	I0906 18:59:25.431568   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 43/120
	I0906 18:59:26.433113   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 44/120
	I0906 18:59:27.435330   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 45/120
	I0906 18:59:28.436662   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 46/120
	I0906 18:59:29.438199   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 47/120
	I0906 18:59:30.439717   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 48/120
	I0906 18:59:31.441676   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 49/120
	I0906 18:59:32.443442   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 50/120
	I0906 18:59:33.444909   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 51/120
	I0906 18:59:34.446421   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 52/120
	I0906 18:59:35.447726   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 53/120
	I0906 18:59:36.449058   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 54/120
	I0906 18:59:37.451046   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 55/120
	I0906 18:59:38.453443   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 56/120
	I0906 18:59:39.454769   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 57/120
	I0906 18:59:40.456311   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 58/120
	I0906 18:59:41.458397   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 59/120
	I0906 18:59:42.460308   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 60/120
	I0906 18:59:43.461724   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 61/120
	I0906 18:59:44.463575   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 62/120
	I0906 18:59:45.464664   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 63/120
	I0906 18:59:46.466168   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 64/120
	I0906 18:59:47.468346   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 65/120
	I0906 18:59:48.469648   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 66/120
	I0906 18:59:49.471292   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 67/120
	I0906 18:59:50.472643   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 68/120
	I0906 18:59:51.473892   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 69/120
	I0906 18:59:52.475652   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 70/120
	I0906 18:59:53.477275   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 71/120
	I0906 18:59:54.479464   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 72/120
	I0906 18:59:55.480807   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 73/120
	I0906 18:59:56.482396   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 74/120
	I0906 18:59:57.484281   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 75/120
	I0906 18:59:58.485464   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 76/120
	I0906 18:59:59.486786   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 77/120
	I0906 19:00:00.488121   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 78/120
	I0906 19:00:01.489653   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 79/120
	I0906 19:00:02.490906   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 80/120
	I0906 19:00:03.492556   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 81/120
	I0906 19:00:04.493912   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 82/120
	I0906 19:00:05.495520   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 83/120
	I0906 19:00:06.496850   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 84/120
	I0906 19:00:07.498661   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 85/120
	I0906 19:00:08.500023   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 86/120
	I0906 19:00:09.501302   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 87/120
	I0906 19:00:10.503200   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 88/120
	I0906 19:00:11.504409   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 89/120
	I0906 19:00:12.505779   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 90/120
	I0906 19:00:13.507471   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 91/120
	I0906 19:00:14.509273   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 92/120
	I0906 19:00:15.510791   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 93/120
	I0906 19:00:16.512600   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 94/120
	I0906 19:00:17.514429   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 95/120
	I0906 19:00:18.515845   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 96/120
	I0906 19:00:19.517045   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 97/120
	I0906 19:00:20.518358   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 98/120
	I0906 19:00:21.519705   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 99/120
	I0906 19:00:22.521601   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 100/120
	I0906 19:00:23.523285   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 101/120
	I0906 19:00:24.524746   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 102/120
	I0906 19:00:25.526353   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 103/120
	I0906 19:00:26.527711   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 104/120
	I0906 19:00:27.529760   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 105/120
	I0906 19:00:28.531157   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 106/120
	I0906 19:00:29.532654   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 107/120
	I0906 19:00:30.533963   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 108/120
	I0906 19:00:31.535413   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 109/120
	I0906 19:00:32.537391   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 110/120
	I0906 19:00:33.538890   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 111/120
	I0906 19:00:34.540330   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 112/120
	I0906 19:00:35.541933   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 113/120
	I0906 19:00:36.543261   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 114/120
	I0906 19:00:37.545260   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 115/120
	I0906 19:00:38.546507   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 116/120
	I0906 19:00:39.547880   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 117/120
	I0906 19:00:40.549153   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 118/120
	I0906 19:00:41.550694   30495 main.go:141] libmachine: (ha-313128-m03) Waiting for machine to stop 119/120
	I0906 19:00:42.551512   30495 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0906 19:00:42.551576   30495 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0906 19:00:42.553704   30495 out.go:201] 
	W0906 19:00:42.555245   30495 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0906 19:00:42.555264   30495 out.go:270] * 
	W0906 19:00:42.558436   30495 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 19:00:42.559858   30495 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-313128 -v=7 --alsologtostderr" : exit status 82
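A minimal way to reproduce the stop and collect the diagnostics requested in the box above, assuming the ha-313128 profile from this run still exists on the host (the profile name and flags are copied from the log; the exact /tmp/minikube_stop_*.log path differs per invocation):

	# Re-run the stop with the same verbosity the test used and keep the output.
	out/minikube-linux-amd64 stop -p ha-313128 -v=7 --alsologtostderr
	# Collect the log bundle the failure message asks to attach to a GitHub issue.
	out/minikube-linux-amd64 logs --file=logs.txt -p ha-313128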
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-313128 --wait=true -v=7 --alsologtostderr
E0906 19:01:44.179006   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:03:07.247383   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:04:49.184529   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:06:44.178522   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:09:49.183814   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:11:12.250829   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:11:44.178952   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-313128 --wait=true -v=7 --alsologtostderr: exit status 80 (11m29.444088777s)
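A short sketch of how one might check which of the four ha-313128 nodes actually came back after this failed start (exit status 80), assuming the profile and its kubeconfig context from this run are still present; the captured stdout below only shows the restart progressing through ha-313128-m03:

	# Driver-level view of every VM in the ha-313128 profile.
	out/minikube-linux-amd64 status -p ha-313128 --alsologtostderr
	# Kubernetes-level view, if an apiserver behind the HA VIP is still reachable.
	kubectl --context ha-313128 get nodes -o wide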

                                                
                                                
-- stdout --
	* [ha-313128] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-313128" primary control-plane node in "ha-313128" cluster
	* Updating the running kvm2 "ha-313128" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-313128-m02" control-plane node in "ha-313128" cluster
	* Restarting existing kvm2 VM for "ha-313128-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.70
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.70
	* Verifying Kubernetes components...
	
	* Starting "ha-313128-m03" control-plane node in "ha-313128" cluster
	* Restarting existing kvm2 VM for "ha-313128-m03" ...
	* Found network options:
	  - NO_PROXY=192.168.39.70,192.168.39.32
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.70
	  - env NO_PROXY=192.168.39.70,192.168.39.32
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:00:42.604662   30973 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:00:42.604922   30973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:42.604931   30973 out.go:358] Setting ErrFile to fd 2...
	I0906 19:00:42.604937   30973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:42.605118   30973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:00:42.605712   30973 out.go:352] Setting JSON to false
	I0906 19:00:42.606606   30973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2592,"bootTime":1725646651,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:00:42.606669   30973 start.go:139] virtualization: kvm guest
	I0906 19:00:42.609026   30973 out.go:177] * [ha-313128] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:00:42.610315   30973 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:00:42.610320   30973 notify.go:220] Checking for updates...
	I0906 19:00:42.612626   30973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:00:42.614046   30973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:00:42.615697   30973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:00:42.617289   30973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:00:42.618880   30973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:00:42.620642   30973 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:00:42.620737   30973 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:00:42.621181   30973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:00:42.621247   30973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:42.636849   30973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0906 19:00:42.637263   30973 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:42.637848   30973 main.go:141] libmachine: Using API Version  1
	I0906 19:00:42.637868   30973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:42.638214   30973 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:42.638435   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.676963   30973 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 19:00:42.678406   30973 start.go:297] selected driver: kvm2
	I0906 19:00:42.678423   30973 start.go:901] validating driver "kvm2" against &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default A
PIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headl
amp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:00:42.678622   30973 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:00:42.678996   30973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:00:42.679070   30973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:00:42.694855   30973 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:00:42.695667   30973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:00:42.695733   30973 cni.go:84] Creating CNI manager for ""
	I0906 19:00:42.695746   30973 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 19:00:42.695799   30973 start.go:340] cluster config:
	{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:00:42.695915   30973 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:00:42.698090   30973 out.go:177] * Starting "ha-313128" primary control-plane node in "ha-313128" cluster
	I0906 19:00:42.699706   30973 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:00:42.699746   30973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:00:42.699754   30973 cache.go:56] Caching tarball of preloaded images
	I0906 19:00:42.699837   30973 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:00:42.699848   30973 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 19:00:42.699961   30973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 19:00:42.700160   30973 start.go:360] acquireMachinesLock for ha-313128: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:00:42.700217   30973 start.go:364] duration metric: took 31.95µs to acquireMachinesLock for "ha-313128"
	I0906 19:00:42.700243   30973 start.go:96] Skipping create...Using existing machine configuration
	I0906 19:00:42.700253   30973 fix.go:54] fixHost starting: 
	I0906 19:00:42.700615   30973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:00:42.700669   30973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:42.715246   30973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0906 19:00:42.715721   30973 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:42.716296   30973 main.go:141] libmachine: Using API Version  1
	I0906 19:00:42.716319   30973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:42.716656   30973 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:42.716872   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.717048   30973 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 19:00:42.718801   30973 fix.go:112] recreateIfNeeded on ha-313128: state=Running err=<nil>
	W0906 19:00:42.718818   30973 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 19:00:42.722091   30973 out.go:177] * Updating the running kvm2 "ha-313128" VM ...
	I0906 19:00:42.723320   30973 machine.go:93] provisionDockerMachine start ...
	I0906 19:00:42.723341   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.723593   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.726581   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.727062   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.727086   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.727274   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.727450   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.727600   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.727717   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.727841   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.728035   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.728049   30973 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 19:00:42.842622   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128
	
	I0906 19:00:42.842652   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:42.842912   30973 buildroot.go:166] provisioning hostname "ha-313128"
	I0906 19:00:42.842943   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:42.843128   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.845900   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.846338   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.846367   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.846533   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.846705   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.846862   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.846998   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.847138   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.847339   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.847355   30973 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-313128 && echo "ha-313128" | sudo tee /etc/hostname
	I0906 19:00:42.971699   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128
	
	I0906 19:00:42.971726   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.974199   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.974577   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.974616   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.974777   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.974955   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.975110   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.975250   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.975389   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.975547   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.975561   30973 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-313128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-313128/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-313128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:00:43.086298   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:00:43.086336   30973 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:00:43.086383   30973 buildroot.go:174] setting up certificates
	I0906 19:00:43.086397   30973 provision.go:84] configureAuth start
	I0906 19:00:43.086411   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:43.086768   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:00:43.089761   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.090172   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.090221   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.090371   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.092707   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.093131   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.093150   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.093281   30973 provision.go:143] copyHostCerts
	I0906 19:00:43.093308   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:00:43.093346   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:00:43.093371   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:00:43.093449   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:00:43.093549   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:00:43.093574   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:00:43.093581   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:00:43.093618   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:00:43.093687   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:00:43.093709   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:00:43.093714   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:00:43.093750   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:00:43.093833   30973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.ha-313128 san=[127.0.0.1 192.168.39.70 ha-313128 localhost minikube]
	I0906 19:00:43.258285   30973 provision.go:177] copyRemoteCerts
	I0906 19:00:43.258366   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:00:43.258394   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.260947   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.261383   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.261412   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.261600   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:43.261791   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.261926   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:43.262075   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:00:43.348224   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 19:00:43.348285   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:00:43.374716   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 19:00:43.374792   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0906 19:00:43.403028   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 19:00:43.403095   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 19:00:43.428263   30973 provision.go:87] duration metric: took 341.855389ms to configureAuth
	I0906 19:00:43.428293   30973 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:00:43.428524   30973 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:00:43.428598   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.431629   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.432063   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.432090   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.432269   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:43.432477   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.432645   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.432802   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:43.432969   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:43.433127   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:43.433144   30973 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:02:14.266261   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:02:14.266292   30973 machine.go:96] duration metric: took 1m31.542957549s to provisionDockerMachine
	I0906 19:02:14.266304   30973 start.go:293] postStartSetup for "ha-313128" (driver="kvm2")
	I0906 19:02:14.266315   30973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:02:14.266329   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.266669   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:02:14.266694   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.270021   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.270486   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.270511   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.270640   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.270873   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.271053   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.271182   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.357410   30973 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:02:14.362343   30973 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:02:14.362367   30973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:02:14.362428   30973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:02:14.362506   30973 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:02:14.362518   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 19:02:14.362611   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:02:14.372770   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:02:14.400357   30973 start.go:296] duration metric: took 134.040576ms for postStartSetup
	I0906 19:02:14.400419   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.400730   30973 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 19:02:14.400755   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.403411   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.403817   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.403842   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.403988   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.404164   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.404325   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.404472   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	W0906 19:02:14.487375   30973 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0906 19:02:14.487427   30973 fix.go:56] duration metric: took 1m31.787174067s for fixHost
	I0906 19:02:14.487448   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.490126   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.490510   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.490541   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.490726   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.490930   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.491084   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.491223   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.491366   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:02:14.491537   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:02:14.491547   30973 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:02:14.598045   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649334.553360444
	
	I0906 19:02:14.598070   30973 fix.go:216] guest clock: 1725649334.553360444
	I0906 19:02:14.598077   30973 fix.go:229] Guest: 2024-09-06 19:02:14.553360444 +0000 UTC Remote: 2024-09-06 19:02:14.487433708 +0000 UTC m=+91.917728709 (delta=65.926736ms)
	I0906 19:02:14.598105   30973 fix.go:200] guest clock delta is within tolerance: 65.926736ms
	I0906 19:02:14.598121   30973 start.go:83] releasing machines lock for "ha-313128", held for 1m31.897881945s
	I0906 19:02:14.598147   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.598410   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:02:14.600993   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.601335   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.601359   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.601535   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602064   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602246   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602360   30973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:02:14.602395   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.602490   30973 ssh_runner.go:195] Run: cat /version.json
	I0906 19:02:14.602505   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.605042   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605172   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605395   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.605418   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605547   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.605652   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.605677   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605689   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.605801   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.605856   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.605923   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.606008   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.606047   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.606191   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.682320   30973 ssh_runner.go:195] Run: systemctl --version
	I0906 19:02:14.707871   30973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:02:14.868709   30973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 19:02:14.878107   30973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:02:14.878182   30973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:02:14.887795   30973 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 19:02:14.887825   30973 start.go:495] detecting cgroup driver to use...
	I0906 19:02:14.887900   30973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:02:14.905023   30973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:02:14.920380   30973 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:02:14.920478   30973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:02:14.936661   30973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:02:14.951264   30973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:02:15.102677   30973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:02:15.248271   30973 docker.go:233] disabling docker service ...
	I0906 19:02:15.248331   30973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:02:15.264423   30973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:02:15.278696   30973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:02:15.426846   30973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:02:15.574956   30973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:02:15.589843   30973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:02:15.609432   30973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 19:02:15.609504   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.620399   30973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:02:15.620463   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.630897   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.641484   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.651945   30973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:02:15.663429   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.674521   30973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.689183   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.700177   30973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:02:15.710433   30973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:02:15.720027   30973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:02:15.864474   30973 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 19:02:16.100883   30973 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:02:16.100949   30973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:02:16.106267   30973 start.go:563] Will wait 60s for crictl version
	I0906 19:02:16.106339   30973 ssh_runner.go:195] Run: which crictl
	I0906 19:02:16.110880   30973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:02:16.149993   30973 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 19:02:16.150090   30973 ssh_runner.go:195] Run: crio --version
	I0906 19:02:16.181738   30973 ssh_runner.go:195] Run: crio --version
	I0906 19:02:16.215139   30973 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 19:02:16.216581   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:02:16.219061   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:16.219402   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:16.219431   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:16.219550   30973 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 19:02:16.224692   30973 kubeadm.go:883] updating cluster {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:02:16.224825   30973 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:02:16.224887   30973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:02:16.279712   30973 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:02:16.279734   30973 crio.go:433] Images already preloaded, skipping extraction
	I0906 19:02:16.279784   30973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:02:16.314787   30973 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:02:16.314818   30973 cache_images.go:84] Images are preloaded, skipping loading
	I0906 19:02:16.314830   30973 kubeadm.go:934] updating node { 192.168.39.70 8443 v1.31.0 crio true true} ...
	I0906 19:02:16.314943   30973 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-313128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:02:16.315021   30973 ssh_runner.go:195] Run: crio config
	I0906 19:02:16.364038   30973 cni.go:84] Creating CNI manager for ""
	I0906 19:02:16.364072   30973 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 19:02:16.364092   30973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:02:16.364128   30973 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-313128 NodeName:ha-313128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:02:16.364353   30973 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-313128"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:02:16.364385   30973 kube-vip.go:115] generating kube-vip config ...
	I0906 19:02:16.364438   30973 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 19:02:16.376810   30973 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 19:02:16.376947   30973 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 19:02:16.377010   30973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 19:02:16.386554   30973 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:02:16.386654   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 19:02:16.396282   30973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 19:02:16.413426   30973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:02:16.430809   30973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0906 19:02:16.447378   30973 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0906 19:02:16.464060   30973 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0906 19:02:16.469045   30973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:02:16.610775   30973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:02:16.625535   30973 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128 for IP: 192.168.39.70
	I0906 19:02:16.625562   30973 certs.go:194] generating shared ca certs ...
	I0906 19:02:16.625577   30973 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.625717   30973 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:02:16.625753   30973 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:02:16.625762   30973 certs.go:256] generating profile certs ...
	I0906 19:02:16.625841   30973 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key
	I0906 19:02:16.625866   30973 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c
	I0906 19:02:16.625879   30973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.70 192.168.39.32 192.168.39.172 192.168.39.254]
	I0906 19:02:16.804798   30973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c ...
	I0906 19:02:16.804827   30973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c: {Name:mkbad82bfe626c7b530e91f2fb1afe292d0ae161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.805001   30973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c ...
	I0906 19:02:16.805015   30973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c: {Name:mk0ae7f160e2379f6800fc471c87e5a6b8b93da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.805088   30973 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt
	I0906 19:02:16.805220   30973 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key
	I0906 19:02:16.805349   30973 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key
	I0906 19:02:16.805363   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 19:02:16.805378   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 19:02:16.805391   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 19:02:16.805424   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 19:02:16.805440   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 19:02:16.805451   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 19:02:16.805460   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 19:02:16.805469   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 19:02:16.805512   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:02:16.805541   30973 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:02:16.805551   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:02:16.805578   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:02:16.805605   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:02:16.805628   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:02:16.805663   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:02:16.805690   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 19:02:16.805703   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:16.805716   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 19:02:16.806296   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:02:16.832409   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:02:16.856617   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:02:16.883121   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:02:16.908841   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 19:02:16.934050   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 19:02:16.957637   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:02:16.982352   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 19:02:17.007984   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:02:17.034211   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:02:17.058444   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:02:17.082266   30973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:02:17.099732   30973 ssh_runner.go:195] Run: openssl version
	I0906 19:02:17.105835   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:02:17.117417   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.122102   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.122167   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.127926   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:02:17.137341   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:02:17.147895   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.152327   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.152384   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.158147   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 19:02:17.167715   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:02:17.179028   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.183445   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.183521   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.189253   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 19:02:17.198545   30973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:02:17.203152   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 19:02:17.208885   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 19:02:17.214536   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 19:02:17.220261   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 19:02:17.226142   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 19:02:17.231663   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 19:02:17.237142   30973 kubeadm.go:392] StartCluster: {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:02:17.237264   30973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:02:17.237316   30973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:02:17.274034   30973 cri.go:89] found id: "9103596edb635c85d04deccce75e13f1cd3262538a222b30a0c94e764770d28c"
	I0906 19:02:17.274063   30973 cri.go:89] found id: "15aafcfc8e779931ee6d9a42dd1aab5a06c3de9f67ec6b3feb49305eed4103e0"
	I0906 19:02:17.274069   30973 cri.go:89] found id: "8fa4e79af67df589d61af4ab106d80e16d119e6feed8deff5827505fa804474c"
	I0906 19:02:17.274074   30973 cri.go:89] found id: "5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939"
	I0906 19:02:17.274078   30973 cri.go:89] found id: "ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d"
	I0906 19:02:17.274083   30973 cri.go:89] found id: "76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa"
	I0906 19:02:17.274087   30973 cri.go:89] found id: "76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b"
	I0906 19:02:17.274091   30973 cri.go:89] found id: "135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1"
	I0906 19:02:17.274095   30973 cri.go:89] found id: "13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d"
	I0906 19:02:17.274104   30973 cri.go:89] found id: "7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f"
	I0906 19:02:17.274108   30973 cri.go:89] found id: "9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387"
	I0906 19:02:17.274112   30973 cri.go:89] found id: "e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8"
	I0906 19:02:17.274116   30973 cri.go:89] found id: "a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f"
	I0906 19:02:17.274121   30973 cri.go:89] found id: ""
	I0906 19:02:17.274164   30973 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-313128 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-313128
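The stderr above shows kube-vip configured to serve the control-plane VIP at 192.168.39.254:8443 (lb_enable/lb_port), and ha_test.go:469 reports the node list exiting with status 80 during the restart. A quick manual triage step for this kind of failure is to check whether that VIP still answers on the API server port. The standalone Go sketch below is illustrative only and is not part of the minikube test suite: the address and port are taken from the log, while the 2-second timeout is an assumption.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// VIP and port come from the kube-vip config in the stderr above;
		// a plain TCP dial is enough to tell whether anything is listening.
		conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 2*time.Second)
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("VIP reachable")
	}
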
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-313128 -n ha-313128
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-313128 logs -n 25: (1.875942233s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m02:/home/docker/cp-test_ha-313128-m03_ha-313128-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m02 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04:/home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m04 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp testdata/cp-test.txt                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2237225197/001/cp-test_ha-313128-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128:/home/docker/cp-test_ha-313128-m04_ha-313128.txt                       |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128 sudo cat                                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128.txt                                 |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m02:/home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m02 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03:/home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m03 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-313128 node stop m02 -v=7                                                     | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-313128 node start m02 -v=7                                                    | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-313128 -v=7                                                           | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-313128 -v=7                                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-313128 --wait=true -v=7                                                    | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 19:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-313128                                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 19:12 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 19:00:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:00:42.604662   30973 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:00:42.604922   30973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:42.604931   30973 out.go:358] Setting ErrFile to fd 2...
	I0906 19:00:42.604937   30973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:42.605118   30973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:00:42.605712   30973 out.go:352] Setting JSON to false
	I0906 19:00:42.606606   30973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2592,"bootTime":1725646651,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:00:42.606669   30973 start.go:139] virtualization: kvm guest
	I0906 19:00:42.609026   30973 out.go:177] * [ha-313128] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:00:42.610315   30973 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:00:42.610320   30973 notify.go:220] Checking for updates...
	I0906 19:00:42.612626   30973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:00:42.614046   30973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:00:42.615697   30973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:00:42.617289   30973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:00:42.618880   30973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:00:42.620642   30973 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:00:42.620737   30973 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:00:42.621181   30973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:00:42.621247   30973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:42.636849   30973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0906 19:00:42.637263   30973 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:42.637848   30973 main.go:141] libmachine: Using API Version  1
	I0906 19:00:42.637868   30973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:42.638214   30973 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:42.638435   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.676963   30973 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 19:00:42.678406   30973 start.go:297] selected driver: kvm2
	I0906 19:00:42.678423   30973 start.go:901] validating driver "kvm2" against &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:00:42.678622   30973 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:00:42.678996   30973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:00:42.679070   30973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:00:42.694855   30973 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:00:42.695667   30973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:00:42.695733   30973 cni.go:84] Creating CNI manager for ""
	I0906 19:00:42.695746   30973 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 19:00:42.695799   30973 start.go:340] cluster config:
	{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:00:42.695915   30973 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:00:42.698090   30973 out.go:177] * Starting "ha-313128" primary control-plane node in "ha-313128" cluster
	I0906 19:00:42.699706   30973 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:00:42.699746   30973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:00:42.699754   30973 cache.go:56] Caching tarball of preloaded images
	I0906 19:00:42.699837   30973 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:00:42.699848   30973 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 19:00:42.699961   30973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 19:00:42.700160   30973 start.go:360] acquireMachinesLock for ha-313128: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:00:42.700217   30973 start.go:364] duration metric: took 31.95µs to acquireMachinesLock for "ha-313128"
	I0906 19:00:42.700243   30973 start.go:96] Skipping create...Using existing machine configuration
	I0906 19:00:42.700253   30973 fix.go:54] fixHost starting: 
	I0906 19:00:42.700615   30973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:00:42.700669   30973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:42.715246   30973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0906 19:00:42.715721   30973 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:42.716296   30973 main.go:141] libmachine: Using API Version  1
	I0906 19:00:42.716319   30973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:42.716656   30973 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:42.716872   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.717048   30973 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 19:00:42.718801   30973 fix.go:112] recreateIfNeeded on ha-313128: state=Running err=<nil>
	W0906 19:00:42.718818   30973 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 19:00:42.722091   30973 out.go:177] * Updating the running kvm2 "ha-313128" VM ...
	I0906 19:00:42.723320   30973 machine.go:93] provisionDockerMachine start ...
	I0906 19:00:42.723341   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.723593   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.726581   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.727062   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.727086   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.727274   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.727450   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.727600   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.727717   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.727841   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.728035   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.728049   30973 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 19:00:42.842622   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128
	
	I0906 19:00:42.842652   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:42.842912   30973 buildroot.go:166] provisioning hostname "ha-313128"
	I0906 19:00:42.842943   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:42.843128   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.845900   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.846338   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.846367   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.846533   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.846705   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.846862   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.846998   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.847138   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.847339   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.847355   30973 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-313128 && echo "ha-313128" | sudo tee /etc/hostname
	I0906 19:00:42.971699   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128
	
	I0906 19:00:42.971726   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.974199   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.974577   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.974616   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.974777   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.974955   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.975110   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.975250   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.975389   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.975547   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.975561   30973 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-313128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-313128/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-313128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:00:43.086298   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:00:43.086336   30973 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:00:43.086383   30973 buildroot.go:174] setting up certificates
	I0906 19:00:43.086397   30973 provision.go:84] configureAuth start
	I0906 19:00:43.086411   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:43.086768   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:00:43.089761   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.090172   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.090221   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.090371   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.092707   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.093131   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.093150   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.093281   30973 provision.go:143] copyHostCerts
	I0906 19:00:43.093308   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:00:43.093346   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:00:43.093371   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:00:43.093449   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:00:43.093549   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:00:43.093574   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:00:43.093581   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:00:43.093618   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:00:43.093687   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:00:43.093709   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:00:43.093714   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:00:43.093750   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:00:43.093833   30973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.ha-313128 san=[127.0.0.1 192.168.39.70 ha-313128 localhost minikube]
	I0906 19:00:43.258285   30973 provision.go:177] copyRemoteCerts
	I0906 19:00:43.258366   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:00:43.258394   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.260947   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.261383   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.261412   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.261600   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:43.261791   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.261926   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:43.262075   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:00:43.348224   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 19:00:43.348285   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:00:43.374716   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 19:00:43.374792   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0906 19:00:43.403028   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 19:00:43.403095   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 19:00:43.428263   30973 provision.go:87] duration metric: took 341.855389ms to configureAuth
	I0906 19:00:43.428293   30973 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:00:43.428524   30973 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:00:43.428598   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.431629   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.432063   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.432090   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.432269   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:43.432477   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.432645   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.432802   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:43.432969   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:43.433127   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:43.433144   30973 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:02:14.266261   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:02:14.266292   30973 machine.go:96] duration metric: took 1m31.542957549s to provisionDockerMachine
	I0906 19:02:14.266304   30973 start.go:293] postStartSetup for "ha-313128" (driver="kvm2")
	I0906 19:02:14.266315   30973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:02:14.266329   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.266669   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:02:14.266694   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.270021   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.270486   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.270511   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.270640   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.270873   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.271053   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.271182   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.357410   30973 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:02:14.362343   30973 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:02:14.362367   30973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:02:14.362428   30973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:02:14.362506   30973 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:02:14.362518   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 19:02:14.362611   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:02:14.372770   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:02:14.400357   30973 start.go:296] duration metric: took 134.040576ms for postStartSetup
	I0906 19:02:14.400419   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.400730   30973 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 19:02:14.400755   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.403411   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.403817   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.403842   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.403988   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.404164   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.404325   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.404472   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	W0906 19:02:14.487375   30973 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0906 19:02:14.487427   30973 fix.go:56] duration metric: took 1m31.787174067s for fixHost
	I0906 19:02:14.487448   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.490126   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.490510   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.490541   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.490726   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.490930   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.491084   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.491223   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.491366   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:02:14.491537   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:02:14.491547   30973 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:02:14.598045   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649334.553360444
	
	I0906 19:02:14.598070   30973 fix.go:216] guest clock: 1725649334.553360444
	I0906 19:02:14.598077   30973 fix.go:229] Guest: 2024-09-06 19:02:14.553360444 +0000 UTC Remote: 2024-09-06 19:02:14.487433708 +0000 UTC m=+91.917728709 (delta=65.926736ms)
	I0906 19:02:14.598105   30973 fix.go:200] guest clock delta is within tolerance: 65.926736ms
	I0906 19:02:14.598121   30973 start.go:83] releasing machines lock for "ha-313128", held for 1m31.897881945s
	I0906 19:02:14.598147   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.598410   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:02:14.600993   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.601335   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.601359   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.601535   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602064   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602246   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602360   30973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:02:14.602395   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.602490   30973 ssh_runner.go:195] Run: cat /version.json
	I0906 19:02:14.602505   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.605042   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605172   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605395   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.605418   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605547   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.605652   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.605677   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605689   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.605801   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.605856   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.605923   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.606008   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.606047   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.606191   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.682320   30973 ssh_runner.go:195] Run: systemctl --version
	I0906 19:02:14.707871   30973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:02:14.868709   30973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 19:02:14.878107   30973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:02:14.878182   30973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:02:14.887795   30973 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 19:02:14.887825   30973 start.go:495] detecting cgroup driver to use...
	I0906 19:02:14.887900   30973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:02:14.905023   30973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:02:14.920380   30973 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:02:14.920478   30973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:02:14.936661   30973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:02:14.951264   30973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:02:15.102677   30973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:02:15.248271   30973 docker.go:233] disabling docker service ...
	I0906 19:02:15.248331   30973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:02:15.264423   30973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:02:15.278696   30973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:02:15.426846   30973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:02:15.574956   30973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:02:15.589843   30973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:02:15.609432   30973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 19:02:15.609504   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.620399   30973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:02:15.620463   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.630897   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.641484   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.651945   30973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:02:15.663429   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.674521   30973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.689183   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.700177   30973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:02:15.710433   30973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:02:15.720027   30973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:02:15.864474   30973 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 19:02:16.100883   30973 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:02:16.100949   30973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:02:16.106267   30973 start.go:563] Will wait 60s for crictl version
	I0906 19:02:16.106339   30973 ssh_runner.go:195] Run: which crictl
	I0906 19:02:16.110880   30973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:02:16.149993   30973 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 19:02:16.150090   30973 ssh_runner.go:195] Run: crio --version
	I0906 19:02:16.181738   30973 ssh_runner.go:195] Run: crio --version
	I0906 19:02:16.215139   30973 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 19:02:16.216581   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:02:16.219061   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:16.219402   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:16.219431   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:16.219550   30973 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 19:02:16.224692   30973 kubeadm.go:883] updating cluster {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:02:16.224825   30973 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:02:16.224887   30973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:02:16.279712   30973 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:02:16.279734   30973 crio.go:433] Images already preloaded, skipping extraction
	I0906 19:02:16.279784   30973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:02:16.314787   30973 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:02:16.314818   30973 cache_images.go:84] Images are preloaded, skipping loading
	I0906 19:02:16.314830   30973 kubeadm.go:934] updating node { 192.168.39.70 8443 v1.31.0 crio true true} ...
	I0906 19:02:16.314943   30973 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-313128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:02:16.315021   30973 ssh_runner.go:195] Run: crio config
	I0906 19:02:16.364038   30973 cni.go:84] Creating CNI manager for ""
	I0906 19:02:16.364072   30973 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 19:02:16.364092   30973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:02:16.364128   30973 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-313128 NodeName:ha-313128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:02:16.364353   30973 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-313128"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:02:16.364385   30973 kube-vip.go:115] generating kube-vip config ...
	I0906 19:02:16.364438   30973 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 19:02:16.376810   30973 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 19:02:16.376947   30973 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 19:02:16.377010   30973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 19:02:16.386554   30973 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:02:16.386654   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 19:02:16.396282   30973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 19:02:16.413426   30973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:02:16.430809   30973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0906 19:02:16.447378   30973 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0906 19:02:16.464060   30973 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0906 19:02:16.469045   30973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:02:16.610775   30973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:02:16.625535   30973 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128 for IP: 192.168.39.70
	I0906 19:02:16.625562   30973 certs.go:194] generating shared ca certs ...
	I0906 19:02:16.625577   30973 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.625717   30973 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:02:16.625753   30973 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:02:16.625762   30973 certs.go:256] generating profile certs ...
	I0906 19:02:16.625841   30973 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key
	I0906 19:02:16.625866   30973 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c
	I0906 19:02:16.625879   30973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.70 192.168.39.32 192.168.39.172 192.168.39.254]
	I0906 19:02:16.804798   30973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c ...
	I0906 19:02:16.804827   30973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c: {Name:mkbad82bfe626c7b530e91f2fb1afe292d0ae161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.805001   30973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c ...
	I0906 19:02:16.805015   30973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c: {Name:mk0ae7f160e2379f6800fc471c87e5a6b8b93da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.805088   30973 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt
	I0906 19:02:16.805220   30973 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key
	I0906 19:02:16.805349   30973 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key
	I0906 19:02:16.805363   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 19:02:16.805378   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 19:02:16.805391   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 19:02:16.805424   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 19:02:16.805440   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 19:02:16.805451   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 19:02:16.805460   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 19:02:16.805469   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 19:02:16.805512   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:02:16.805541   30973 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:02:16.805551   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:02:16.805578   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:02:16.805605   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:02:16.805628   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:02:16.805663   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:02:16.805690   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 19:02:16.805703   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:16.805716   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 19:02:16.806296   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:02:16.832409   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:02:16.856617   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:02:16.883121   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:02:16.908841   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 19:02:16.934050   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 19:02:16.957637   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:02:16.982352   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 19:02:17.007984   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:02:17.034211   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:02:17.058444   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:02:17.082266   30973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:02:17.099732   30973 ssh_runner.go:195] Run: openssl version
	I0906 19:02:17.105835   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:02:17.117417   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.122102   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.122167   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.127926   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:02:17.137341   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:02:17.147895   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.152327   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.152384   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.158147   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 19:02:17.167715   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:02:17.179028   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.183445   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.183521   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.189253   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 19:02:17.198545   30973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:02:17.203152   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 19:02:17.208885   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 19:02:17.214536   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 19:02:17.220261   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 19:02:17.226142   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 19:02:17.231663   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 19:02:17.237142   30973 kubeadm.go:392] StartCluster: {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:02:17.237264   30973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:02:17.237316   30973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:02:17.274034   30973 cri.go:89] found id: "9103596edb635c85d04deccce75e13f1cd3262538a222b30a0c94e764770d28c"
	I0906 19:02:17.274063   30973 cri.go:89] found id: "15aafcfc8e779931ee6d9a42dd1aab5a06c3de9f67ec6b3feb49305eed4103e0"
	I0906 19:02:17.274069   30973 cri.go:89] found id: "8fa4e79af67df589d61af4ab106d80e16d119e6feed8deff5827505fa804474c"
	I0906 19:02:17.274074   30973 cri.go:89] found id: "5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939"
	I0906 19:02:17.274078   30973 cri.go:89] found id: "ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d"
	I0906 19:02:17.274083   30973 cri.go:89] found id: "76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa"
	I0906 19:02:17.274087   30973 cri.go:89] found id: "76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b"
	I0906 19:02:17.274091   30973 cri.go:89] found id: "135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1"
	I0906 19:02:17.274095   30973 cri.go:89] found id: "13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d"
	I0906 19:02:17.274104   30973 cri.go:89] found id: "7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f"
	I0906 19:02:17.274108   30973 cri.go:89] found id: "9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387"
	I0906 19:02:17.274112   30973 cri.go:89] found id: "e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8"
	I0906 19:02:17.274116   30973 cri.go:89] found id: "a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f"
	I0906 19:02:17.274121   30973 cri.go:89] found id: ""
	I0906 19:02:17.274164   30973 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.743058214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8a3c4f8-1d7d-4f75-aa80-39e5fbcd7b36 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.751321389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2a45d0d-c4fb-401a-92f5-35e4eb0fd7ed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.752024393Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649932751990721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2a45d0d-c4fb-401a-92f5-35e4eb0fd7ed name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.753527358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7aa1fb1-d887-42da-984b-dbe4f17210bc name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.753617475Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7aa1fb1-d887-42da-984b-dbe4f17210bc name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.754190419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725649381484380479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5797d04311d37274a0a42cb3eebc2559a195c1202cabadd8d4b2208bf93cc186,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649370481993651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117
b7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85
ebe5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ce31a9f342048ead1d321eaeb8e3938678e106ee41e891c231022c89806e9f,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725649343291112866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7aa1fb1-d887-42da-984b-dbe4f17210bc name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.812407190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3780018-6043-45ac-88f3-fdc47b886063 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.812587977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3780018-6043-45ac-88f3-fdc47b886063 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.813991118Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8393d42-aaf1-4fd4-a113-700436017454 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.814444975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649932814419441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8393d42-aaf1-4fd4-a113-700436017454 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.815142944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdbe40c2-1966-4ecc-af86-938b350eb7d6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.815200184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdbe40c2-1966-4ecc-af86-938b350eb7d6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.815912075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725649381484380479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5797d04311d37274a0a42cb3eebc2559a195c1202cabadd8d4b2208bf93cc186,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649370481993651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117
b7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85
ebe5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ce31a9f342048ead1d321eaeb8e3938678e106ee41e891c231022c89806e9f,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725649343291112866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bdbe40c2-1966-4ecc-af86-938b350eb7d6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.864538659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f21de2cd-2390-41c6-923e-38c7be4fb764 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.864678731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f21de2cd-2390-41c6-923e-38c7be4fb764 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.866065481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee397ae9-19d6-41d7-a627-41fecec5aafc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.866566784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649932866536578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee397ae9-19d6-41d7-a627-41fecec5aafc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.867291101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da5c6e15-3066-4176-80c1-7675167143b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.867460566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da5c6e15-3066-4176-80c1-7675167143b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.868111620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725649381484380479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5797d04311d37274a0a42cb3eebc2559a195c1202cabadd8d4b2208bf93cc186,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649370481993651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117
b7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85
ebe5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ce31a9f342048ead1d321eaeb8e3938678e106ee41e891c231022c89806e9f,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725649343291112866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da5c6e15-3066-4176-80c1-7675167143b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.893143080Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f94c8943-bc96-4360-b2a0-46ae8f8b91f8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.893592030Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-s2cgz,Uid:ea1b3998-c924-47a2-a321-bd8f20ed324e,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649376621609740,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:54:05.088814193Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-313128,Uid:f6d46474fdf3e5977e60eb17ada4e349,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1725649356168399514,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{kubernetes.io/config.hash: f6d46474fdf3e5977e60eb17ada4e349,kubernetes.io/config.seen: 2024-09-06T19:02:16.420886713Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-gccvh,Uid:9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649343007117605,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-06T18:51:43.928140026Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&PodSandboxMetadata{Name:etcd-ha-313128,Uid:9cddf482287bf3b2dbb1236f43dc96c3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342953129237,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.70:2379,kubernetes.io/config.hash: 9cddf482287bf3b2dbb1236f43dc96c3,kubernetes.io/config.seen: 2024-09-06T18:51:25.375047261Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-313128,Uid:5971d16b859a22cc0a378921d7577d4a,Namespace
:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342939628598,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5971d16b859a22cc0a378921d7577d4a,kubernetes.io/config.seen: 2024-09-06T18:51:25.375053288Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&PodSandboxMetadata{Name:kindnet-h2trt,Uid:90af3550-1fae-46bd-9329-f185fcdb23c6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342931311678,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fc
db23c6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:29.831601797Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-313128,Uid:1f52c5565007a9e3852323973b3197bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342880954362,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1f52c5565007a9e3852323973b3197bc,kubernetes.io/config.seen: 2024-09-06T18:51:25.375052130Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d59
60,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6c957eac-7904-4c39-b858-bfb7da32c75c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342875853842,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/t
mp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-06T18:51:43.943423552Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-313128,Uid:19f5824a415bb48f2bb6ab3144efbec6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342869057108,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.70:8443,kubernetes.io/config.hash: 19f5824a415bb48f2bb6ab3144efbec6,kubernetes.io/config.seen: 2024-09-06T1
8:51:25.375050957Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&PodSandboxMetadata{Name:kube-proxy-h5xn7,Uid:e45358c5-398e-4d33-9bd0-a4f28ce17ac9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342854024488,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:29.825007552Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-gk28z,Uid:ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649337900654122,Lab
els:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:43.938411060Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-s2cgz,Uid:ea1b3998-c924-47a2-a321-bd8f20ed324e,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648845415388093,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:54:05.088814193Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-gk28z,Uid:ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648704255562292,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:43.938411060Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-gccvh,Uid:9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648704235684853,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:43.928140026Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&PodSandboxMetadata{Name:kube-proxy-h5xn7,Uid:e45358c5-398e-4d33-9bd0-a4f28ce17ac9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648690148016126,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:29.825007552Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&Po
dSandbox{Id:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&PodSandboxMetadata{Name:kindnet-h2trt,Uid:90af3550-1fae-46bd-9329-f185fcdb23c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648690143296092,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:29.831601797Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-313128,Uid:5971d16b859a22cc0a378921d7577d4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648678770402611,Labels:map[string]string{component: kube-scheduler,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5971d16b859a22cc0a378921d7577d4a,kubernetes.io/config.seen: 2024-09-06T18:51:18.311933771Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&PodSandboxMetadata{Name:etcd-ha-313128,Uid:9cddf482287bf3b2dbb1236f43dc96c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648678755469606,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.70:2379,kubernetes.io/config.hash: 9cddf482287
bf3b2dbb1236f43dc96c3,kubernetes.io/config.seen: 2024-09-06T18:51:18.311927690Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f94c8943-bc96-4360-b2a0-46ae8f8b91f8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.894562637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43cc04b7-f223-41c5-b3a3-560f5aa13b2e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.895090670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43cc04b7-f223-41c5-b3a3-560f5aa13b2e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:12 ha-313128 crio[3609]: time="2024-09-06 19:12:12.896514866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725649381484380479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5797d04311d37274a0a42cb3eebc2559a195c1202cabadd8d4b2208bf93cc186,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649370481993651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117
b7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85
ebe5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ce31a9f342048ead1d321eaeb8e3938678e106ee41e891c231022c89806e9f,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725649343291112866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43cc04b7-f223-41c5-b3a3-560f5aa13b2e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	563331b1df56b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       4                   a0a256d64c27f       storage-provisioner
	1cfd32c774caf       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      9 minutes ago       Running             kube-controller-manager   2                   4c7e7fc7137a0       kube-controller-manager-ha-313128
	8ef80321a5967       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      9 minutes ago       Running             kube-apiserver            3                   3ae5e99906a2e       kube-apiserver-ha-313128
	9d3f5b10c63ca       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      9 minutes ago       Running             busybox                   1                   7cbf701e90a6f       busybox-7dff88458-s2cgz
	5797d04311d37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       3                   a0a256d64c27f       storage-provisioner
	a170bb1c8a3cb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      9 minutes ago       Running             kube-vip                  0                   9a8c2a564ace3       kube-vip-ha-313128
	d3e14bee704aa       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      9 minutes ago       Running             kindnet-cni               1                   419150e9a53e3       kindnet-h2trt
	bea01e33385d8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      9 minutes ago       Running             kube-scheduler            1                   64b8d66092688       kube-scheduler-ha-313128
	36d954de08dab       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      9 minutes ago       Running             etcd                      1                   54824bb3087ee       etcd-ha-313128
	7fab375b2e00c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      9 minutes ago       Exited              kube-controller-manager   1                   4c7e7fc7137a0       kube-controller-manager-ha-313128
	25ee04d39c4c9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Running             coredns                   1                   d481cfc1806b6       coredns-6f6b679f8f-gccvh
	57ce31a9f3420       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      9 minutes ago       Exited              kube-apiserver            2                   3ae5e99906a2e       kube-apiserver-ha-313128
	77c80de1adc0a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      9 minutes ago       Running             kube-proxy                1                   e453276f34782       kube-proxy-h5xn7
	f78069cd2a935       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Running             coredns                   1                   7356e11979968       coredns-6f6b679f8f-gk28z
	7b3f2cd2f6c9c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   18 minutes ago      Exited              busybox                   0                   74b84ec8f17a7       busybox-7dff88458-s2cgz
	5b950806bc4b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Exited              coredns                   0                   9151daea570f3       coredns-6f6b679f8f-gk28z
	76bbd732b8695       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Exited              coredns                   0                   8449d8c8bfa3e       coredns-6f6b679f8f-gccvh
	76ca94f153009       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    20 minutes ago      Exited              kindnet-cni               0                   a3128d8e090be       kindnet-h2trt
	135074e446370       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      20 minutes ago      Exited              kube-proxy                0                   dde7791c0770a       kube-proxy-h5xn7
	e32b22b9f83ac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Exited              etcd                      0                   0ced27e2ded46       etcd-ha-313128
	a406aeec43303       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      20 minutes ago      Exited              kube-scheduler            0                   aeb85ed29ab1d       kube-scheduler-ha-313128
	
	
	==> coredns [25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85ebe5629edada2adae88766] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1569945278]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:02:25.248) (total time: 10000ms):
	Trace[1569945278]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:02:35.249)
	Trace[1569945278]: [10.000824892s] [10.000824892s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939] <==
	[INFO] 10.244.0.4:42561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009569s
	[INFO] 10.244.0.4:55114 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086084s
	[INFO] 10.244.0.4:53953 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067022s
	[INFO] 10.244.1.2:48594 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121564s
	[INFO] 10.244.1.2:53114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166914s
	[INFO] 10.244.2.2:34659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158468s
	[INFO] 10.244.2.2:34171 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176512s
	[INFO] 10.244.0.4:58990 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009694s
	[INFO] 10.244.0.4:43562 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118003s
	[INFO] 10.244.0.4:33609 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086781s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1840&timeout=7m47s&timeoutSeconds=467&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1840&timeout=6m57s&timeoutSeconds=417&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[442499001]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.006) (total time: 13786ms):
	Trace[442499001]: ---"Objects listed" error:Unauthorized 13786ms (19:00:41.792)
	Trace[442499001]: [13.786231314s] [13.786231314s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[85447720]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.308) (total time: 13485ms):
	Trace[85447720]: ---"Objects listed" error:Unauthorized 13484ms (19:00:41.792)
	Trace[85447720]: [13.485399749s] [13.485399749s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa] <==
	[INFO] 10.244.1.2:35244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089298s
	[INFO] 10.244.1.2:54461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083864s
	[INFO] 10.244.2.2:46046 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126212s
	[INFO] 10.244.2.2:45762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078805s
	[INFO] 10.244.0.4:56166 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109081s
	[INFO] 10.244.1.2:44485 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175559s
	[INFO] 10.244.1.2:60331 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113433s
	[INFO] 10.244.2.2:33944 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094759s
	[INFO] 10.244.2.2:54249 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007626s
	[INFO] 10.244.0.4:34049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091783s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1840&timeout=6m52s&timeoutSeconds=412&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1840&timeout=9m31s&timeoutSeconds=571&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1362283421]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.166) (total time: 13625ms):
	Trace[1362283421]: ---"Objects listed" error:Unauthorized 13625ms (19:00:41.791)
	Trace[1362283421]: [13.625497855s] [13.625497855s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2000776186]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.018) (total time: 13773ms):
	Trace[2000776186]: ---"Objects listed" error:Unauthorized 13773ms (19:00:41.792)
	Trace[2000776186]: [13.773675488s] [13.773675488s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1498074878]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:02:28.897) (total time: 10001ms):
	Trace[1498074878]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:02:38.898)
	Trace[1498074878]: [10.001684214s] [10.001684214s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:54584->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:54584->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-313128
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_51_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:12:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:08:18 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:08:18 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:08:18 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:08:18 +0000   Fri, 06 Sep 2024 18:51:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-313128
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a8374058d8a4ce69ddf9d9b9a6bab88
	  System UUID:                5a837405-8d8a-4ce6-9ddf-9d9b9a6bab88
	  Boot ID:                    4ac8491f-e614-44c2-96e0-f1733bbe0f17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s2cgz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-6f6b679f8f-gccvh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-6f6b679f8f-gk28z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-ha-313128                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-h2trt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-313128             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-313128    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-h5xn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-313128             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-313128                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 20m                kube-proxy       
	  Normal   Starting                 9m8s               kube-proxy       
	  Normal   NodeHasSufficientPID     20m                kubelet          Node ha-313128 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m                kubelet          Node ha-313128 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m                kubelet          Node ha-313128 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           20m                node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Normal   NodeReady                20m                kubelet          Node ha-313128 status is now: NodeReady
	  Normal   RegisteredNode           19m                node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Warning  ContainerGCFailed        10m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             10m (x3 over 11m)  kubelet          Node ha-313128 status is now: NodeNotReady
	  Normal   RegisteredNode           9m13s              node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Normal   RegisteredNode           9m6s               node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	
	
	Name:               ha-313128-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_52_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:52:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:12:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:08:53 +0000   Fri, 06 Sep 2024 19:03:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:08:53 +0000   Fri, 06 Sep 2024 19:03:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:08:53 +0000   Fri, 06 Sep 2024 19:03:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:08:53 +0000   Fri, 06 Sep 2024 19:03:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    ha-313128-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9324a423f4b54997b7d3837f23afbaaf
	  System UUID:                9324a423-f4b5-4997-b7d3-837f23afbaaf
	  Boot ID:                    b4095a80-7da5-4719-8b2f-897a61c535e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-54m66                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-313128-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-t65ls                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-313128-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-313128-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xjp6p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-313128-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-313128-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 8m50s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                    node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet          Node ha-313128-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet          Node ha-313128-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet          Node ha-313128-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                    node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  NodeNotReady             16m                    node-controller  Node ha-313128-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  9m33s (x8 over 9m33s)  kubelet          Node ha-313128-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    9m33s (x8 over 9m33s)  kubelet          Node ha-313128-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m33s (x7 over 9m33s)  kubelet          Node ha-313128-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m13s                  node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  RegisteredNode           9m6s                   node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	
	
	Name:               ha-313128-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_53_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:53:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:12:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:09:23 +0000   Fri, 06 Sep 2024 19:03:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:09:23 +0000   Fri, 06 Sep 2024 19:03:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:09:23 +0000   Fri, 06 Sep 2024 19:03:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:09:23 +0000   Fri, 06 Sep 2024 19:03:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    ha-313128-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d33107b982c427ca47333d2971ade3a
	  System UUID:                1d33107b-982c-427c-a473-33d2971ade3a
	  Boot ID:                    116048de-2b5b-45d0-9564-85dc9ea57043
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-k99v6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-313128-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-jl257                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-313128-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-313128-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-gfjr7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-313128-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-313128-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 18m                    kube-proxy       
	  Normal   Starting                 8m8s                   kube-proxy       
	  Normal   RegisteredNode           18m                    node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-313128-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-313128-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)      kubelet          Node ha-313128-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                    node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	  Normal   RegisteredNode           18m                    node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	  Normal   RegisteredNode           9m13s                  node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	  Normal   RegisteredNode           9m6s                   node-controller  Node ha-313128-m03 event: Registered Node ha-313128-m03 in Controller
	  Normal   NodeNotReady             8m33s                  node-controller  Node ha-313128-m03 status is now: NodeNotReady
	  Normal   Starting                 8m26s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m26s (x2 over 8m26s)  kubelet          Node ha-313128-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m26s (x2 over 8m26s)  kubelet          Node ha-313128-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m26s (x2 over 8m26s)  kubelet          Node ha-313128-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8m26s                  kubelet          Node ha-313128-m03 has been rebooted, boot id: 116048de-2b5b-45d0-9564-85dc9ea57043
	  Normal   NodeReady                8m26s                  kubelet          Node ha-313128-m03 status is now: NodeReady
	
	
	Name:               ha-313128-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_54_39_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:54:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:58:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 19:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 19:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 19:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 19:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-313128-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1284faaf1604a6db25bba3bb7ed5953
	  System UUID:                f1284faa-f160-4a6d-b25b-ba3bb7ed5953
	  Boot ID:                    25844c67-e2f9-444b-99b9-94b7e385f59f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fsbs9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-proxy-8tm7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node ha-313128-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node ha-313128-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node ha-313128-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-313128-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m13s              node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  RegisteredNode           9m6s               node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  NodeNotReady             8m33s              node-controller  Node ha-313128-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 18:51] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072122] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.201564] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.131661] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.284243] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.067260] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.541515] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.060417] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251462] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.088029] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.073110] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.070796] kauditd_printk_skb: 38 callbacks suppressed
	[Sep 6 18:52] kauditd_printk_skb: 24 callbacks suppressed
	[Sep 6 19:02] systemd-fstab-generator[3534]: Ignoring "noauto" option for root device
	[  +0.149812] systemd-fstab-generator[3546]: Ignoring "noauto" option for root device
	[  +0.178231] systemd-fstab-generator[3560]: Ignoring "noauto" option for root device
	[  +0.144887] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[  +0.283505] systemd-fstab-generator[3600]: Ignoring "noauto" option for root device
	[  +0.753951] systemd-fstab-generator[3694]: Ignoring "noauto" option for root device
	[  +6.401831] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.203445] kauditd_printk_skb: 87 callbacks suppressed
	[Sep 6 19:03] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a] <==
	{"level":"warn","ts":"2024-09-06T19:03:41.672327Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T19:03:41.749199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T19:03:41.772266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T19:03:41.861182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T19:03:41.872326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T19:03:41.972676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T19:03:42.072195Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T19:03:42.135101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"d9e0442f914d2c09","from":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-06T19:03:42.144665Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.172:2380/version","remote-member-id":"63c578731edaad90","error":"Get \"https://192.168.39.172:2380/version\": dial tcp 192.168.39.172:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:03:42.144906Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"63c578731edaad90","error":"Get \"https://192.168.39.172:2380/version\": dial tcp 192.168.39.172:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:03:44.540736Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"63c578731edaad90","rtt":"0s","error":"dial tcp 192.168.39.172:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:03:44.540775Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"63c578731edaad90","rtt":"0s","error":"dial tcp 192.168.39.172:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:03:46.147594Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.172:2380/version","remote-member-id":"63c578731edaad90","error":"Get \"https://192.168.39.172:2380/version\": dial tcp 192.168.39.172:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:03:46.147676Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"63c578731edaad90","error":"Get \"https://192.168.39.172:2380/version\": dial tcp 192.168.39.172:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:03:49.541198Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"63c578731edaad90","rtt":"0s","error":"dial tcp 192.168.39.172:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:03:49.541227Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"63c578731edaad90","rtt":"0s","error":"dial tcp 192.168.39.172:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:03:50.150016Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.172:2380/version","remote-member-id":"63c578731edaad90","error":"Get \"https://192.168.39.172:2380/version\": dial tcp 192.168.39.172:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-06T19:03:50.150160Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"63c578731edaad90","error":"Get \"https://192.168.39.172:2380/version\": dial tcp 192.168.39.172:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-06T19:03:51.995134Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:03:51.996038Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:03:52.002402Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:03:52.015734Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d9e0442f914d2c09","to":"63c578731edaad90","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-06T19:03:52.015842Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:03:52.027875Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d9e0442f914d2c09","to":"63c578731edaad90","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-06T19:03:52.028748Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	
	
	==> etcd [e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8] <==
	2024/09/06 19:00:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/06 19:00:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-06T19:00:43.692458Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.70:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:00:43.692566Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.70:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:00:43.692746Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"d9e0442f914d2c09","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-06T19:00:43.692938Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.692970Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693066Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693169Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693208Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693273Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693377Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693398Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693561Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693580Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693692Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693802Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693879Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693927Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:00:43.697978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.908057312s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-06T19:00:43.698055Z","caller":"traceutil/trace.go:171","msg":"trace[1664952415] range","detail":"{range_begin:; range_end:; }","duration":"1.908148083s","start":"2024-09-06T19:00:41.789897Z","end":"2024-09-06T19:00:43.698045Z","steps":["trace[1664952415] 'agreement among raft nodes before linearized reading'  (duration: 1.908055362s)"],"step_count":1}
	{"level":"error","ts":"2024-09-06T19:00:43.698121Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-06T19:00:43.697906Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2024-09-06T19:00:43.698981Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2024-09-06T19:00:43.699230Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-313128","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.70:2380"],"advertise-client-urls":["https://192.168.39.70:2379"]}
	
	
	==> kernel <==
	 19:12:13 up 21 min,  0 users,  load average: 0.22, 0.22, 0.19
	Linux ha-313128 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b] <==
	I0906 19:00:13.769733       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:00:13.769887       1 main.go:299] handling current node
	I0906 19:00:13.770039       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:00:13.770068       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:00:13.770242       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:00:13.770325       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:00:13.770653       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:00:13.770688       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:00:23.769408       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:00:23.769600       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:00:23.769804       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:00:23.769858       1 main.go:299] handling current node
	I0906 19:00:23.769886       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:00:23.769939       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:00:23.770105       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:00:23.770150       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	E0906 19:00:26.750051       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1815&timeout=5m45s&timeoutSeconds=345&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0906 19:00:33.769131       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:00:33.769253       1 main.go:299] handling current node
	I0906 19:00:33.769290       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:00:33.769309       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:00:33.769557       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:00:33.769593       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:00:33.769784       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:00:33.769821       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0] <==
	I0906 19:11:34.880188       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:11:44.882119       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:11:44.882271       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:11:44.882581       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:11:44.882653       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:11:44.882768       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:11:44.882803       1 main.go:299] handling current node
	I0906 19:11:44.882835       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:11:44.882858       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:11:54.880709       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:11:54.880751       1 main.go:299] handling current node
	I0906 19:11:54.880768       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:11:54.880788       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:11:54.880963       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:11:54.880995       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:11:54.881077       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:11:54.881109       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:12:04.880583       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:12:04.880697       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:12:04.880835       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:12:04.880867       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:12:04.880949       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:12:04.880980       1 main.go:299] handling current node
	I0906 19:12:04.881002       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:12:04.881018       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [57ce31a9f342048ead1d321eaeb8e3938678e106ee41e891c231022c89806e9f] <==
	I0906 19:02:24.035108       1 options.go:228] external host was not specified, using 192.168.39.70
	I0906 19:02:24.043154       1 server.go:142] Version: v1.31.0
	I0906 19:02:24.043215       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:02:24.501757       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0906 19:02:24.505574       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:02:24.510198       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0906 19:02:24.510268       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0906 19:02:24.510576       1 instance.go:232] Using reconciler: lease
	W0906 19:02:44.499795       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:02:44.499795       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:02:44.511859       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0906 19:02:44.511884       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40] <==
	I0906 19:03:03.588901       1 aggregator.go:169] waiting for initial CRD sync...
	I0906 19:03:03.588911       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0906 19:03:03.688787       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:03:03.690694       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:03:03.690782       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0906 19:03:03.690838       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:03:03.695425       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:03:03.695465       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:03:03.695522       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:03:03.695528       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:03:03.695533       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:03:03.697257       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0906 19:03:03.700575       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.172 192.168.39.32]
	I0906 19:03:03.712722       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:03:03.712957       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:03:03.712998       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:03:03.716307       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0906 19:03:03.722515       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:03:03.722569       1 policy_source.go:224] refreshing policies
	I0906 19:03:03.779663       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:03:03.802361       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:03:03.811625       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0906 19:03:03.814925       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0906 19:03:04.604439       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:03:05.231781       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.172 192.168.39.32 192.168.39.70]
	
	
	==> kube-controller-manager [1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced] <==
	I0906 19:03:24.272999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="20.050781ms"
	I0906 19:03:24.273594       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="221.7µs"
	I0906 19:03:37.556373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.735231ms"
	I0906 19:03:37.556682       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="107.138µs"
	I0906 19:03:40.707852       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-313128-m04"
	I0906 19:03:40.707874       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:03:40.715990       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 19:03:40.735689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:03:40.744053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 19:03:40.995747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.461652ms"
	I0906 19:03:40.995845       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.352µs"
	I0906 19:03:42.138693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:03:46.066221       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 19:03:47.317715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:03:47.336868       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:03:48.130331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="117.879µs"
	I0906 19:03:48.603204       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	I0906 19:03:50.984555       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:03:52.219590       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 19:04:07.524796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.759175ms"
	I0906 19:04:07.527288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.501µs"
	I0906 19:04:17.628091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:08:18.837233       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128"
	I0906 19:08:53.572898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	I0906 19:09:23.576952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	
	
	==> kube-controller-manager [7fab375b2e00c6c1c477e49d20575c282cf15631db08117b7cbd6669002057a7] <==
	I0906 19:02:25.178571       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:02:25.751144       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:02:25.751189       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:02:25.753093       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:02:25.753241       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:02:25.753740       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:02:25.753823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0906 19:02:45.757127       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.70:8443/healthz\": dial tcp 192.168.39.70:8443: connect: connection refused"
	
	
	==> kube-proxy [135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1] <==
	E0906 18:59:34.909836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:37.981970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:37.982117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:37.982239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:37.982324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:37.982672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:37.982810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:44.125613       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:44.125832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:44.125977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:44.126031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:47.196966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:47.197099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:53.343030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:53.343098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:53.343233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:53.343270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:02.557917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:02.558078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:11.774788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:11.774869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:20.989983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:20.990159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:24.061791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:24.062012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:02:26.941011       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0906 19:02:30.014890       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0906 19:02:33.085953       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0906 19:02:39.230311       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0906 19:02:48.446328       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0906 19:03:04.718998       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.70"]
	E0906 19:03:04.719127       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:03:04.788646       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:03:04.788721       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:03:04.790601       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:03:04.795556       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:03:04.800789       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:03:04.800820       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:03:04.804328       1 config.go:197] "Starting service config controller"
	I0906 19:03:04.804405       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:03:04.804517       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:03:04.804524       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:03:04.807066       1 config.go:326] "Starting node config controller"
	I0906 19:03:04.807107       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:03:04.905154       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:03:04.905252       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:03:04.907240       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f] <==
	E0906 18:54:39.143315       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8tm7b\": pod kube-proxy-8tm7b is already assigned to node \"ha-313128-m04\"" pod="kube-system/kube-proxy-8tm7b"
	I0906 18:54:39.143372       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8tm7b" node="ha-313128-m04"
	E0906 18:54:39.143180       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k9szn\": pod kindnet-k9szn is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k9szn" node="ha-313128-m04"
	E0906 18:54:39.144192       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fdc10711-7099-424e-885e-65589f5642e5(kube-system/kindnet-k9szn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k9szn"
	E0906 18:54:39.144252       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k9szn\": pod kindnet-k9szn is already assigned to node \"ha-313128-m04\"" pod="kube-system/kindnet-k9szn"
	I0906 18:54:39.144297       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k9szn" node="ha-313128-m04"
	E0906 18:54:39.236601       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rnm78\": pod kube-proxy-rnm78 is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rnm78" node="ha-313128-m04"
	E0906 18:54:39.236925       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rnm78\": pod kube-proxy-rnm78 is already assigned to node \"ha-313128-m04\"" pod="kube-system/kube-proxy-rnm78"
	I0906 18:54:39.240895       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rnm78" node="ha-313128-m04"
	E0906 19:00:32.447945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0906 19:00:34.548228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:34.780196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0906 19:00:34.781245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0906 19:00:34.940096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:36.433926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0906 19:00:36.554090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0906 19:00:38.972401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0906 19:00:38.989303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:40.451678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:40.700326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0906 19:00:41.073589       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0906 19:00:41.963164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0906 19:00:42.456392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	I0906 19:00:43.531312       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0906 19:00:43.532028       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9] <==
	W0906 19:02:53.921617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.70:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:53.921659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.70:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:54.249952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.70:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:54.250053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.70:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:54.365726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.70:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:54.365850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.70:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:54.992268       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.70:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:54.992431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.70:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:55.233887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.70:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:55.234001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.70:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:55.498272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.70:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:55.498399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.70:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:55.867188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.70:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:55.867263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.70:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:59.506956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.70:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:59.507082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.70:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:59.805677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.70:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:59.805807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.70:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:03:03.619138       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 19:03:03.619880       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0906 19:03:03.674248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 19:03:03.674446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:03:03.674405       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 19:03:03.674591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0906 19:03:27.129977       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 19:10:35 ha-313128 kubelet[1323]: E0906 19:10:35.789274    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649835788990570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:10:35 ha-313128 kubelet[1323]: E0906 19:10:35.789322    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649835788990570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:10:45 ha-313128 kubelet[1323]: E0906 19:10:45.791930    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649845791439649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:10:45 ha-313128 kubelet[1323]: E0906 19:10:45.792000    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649845791439649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:10:55 ha-313128 kubelet[1323]: E0906 19:10:55.795352    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649855794384262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:10:55 ha-313128 kubelet[1323]: E0906 19:10:55.795599    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649855794384262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:05 ha-313128 kubelet[1323]: E0906 19:11:05.797123    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649865796872569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:05 ha-313128 kubelet[1323]: E0906 19:11:05.797156    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649865796872569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:15 ha-313128 kubelet[1323]: E0906 19:11:15.799819    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649875799094951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:15 ha-313128 kubelet[1323]: E0906 19:11:15.799846    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649875799094951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:25 ha-313128 kubelet[1323]: E0906 19:11:25.512333    1323 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:11:25 ha-313128 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:11:25 ha-313128 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:11:25 ha-313128 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:11:25 ha-313128 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:11:25 ha-313128 kubelet[1323]: E0906 19:11:25.801451    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649885801143816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:25 ha-313128 kubelet[1323]: E0906 19:11:25.801514    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649885801143816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:35 ha-313128 kubelet[1323]: E0906 19:11:35.803389    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649895802942496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:35 ha-313128 kubelet[1323]: E0906 19:11:35.803428    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649895802942496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:45 ha-313128 kubelet[1323]: E0906 19:11:45.805656    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649905804829244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:45 ha-313128 kubelet[1323]: E0906 19:11:45.805702    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649905804829244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:55 ha-313128 kubelet[1323]: E0906 19:11:55.812722    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649915812247276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:55 ha-313128 kubelet[1323]: E0906 19:11:55.813115    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649915812247276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:12:05 ha-313128 kubelet[1323]: E0906 19:12:05.815335    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649925814398837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:12:05 ha-313128 kubelet[1323]: E0906 19:12:05.815359    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649925814398837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:12:12.389432   33583 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19576-6021/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
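
(For context on the `bufio.Scanner: token too long` error captured above: Go's bufio.Scanner refuses lines longer than its default 64 KiB token limit, which is what logs.go hits when lastStart.txt contains a very long line. The sketch below is illustrative only — `readLongLines` is a hypothetical helper, not minikube's actual code — and simply shows how a scanner's buffer limit can be raised.)

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines is a hypothetical helper: it scans a file line by line,
// raising bufio.Scanner's buffer limit (default 64 KiB) so that a very
// long line, such as a single-line lastStart.txt, does not trigger
// "bufio.Scanner: token too long".
func readLongLines(path string, maxLine int) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Start with a 1 MiB buffer and allow tokens up to maxLine bytes.
	sc.Buffer(make([]byte, 1024*1024), maxLine)

	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	return lines, sc.Err()
}

func main() {
	// Allow lines up to 16 MiB; the path here is just an example.
	lines, err := readLongLines("lastStart.txt", 16*1024*1024)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("read %d lines\n", len(lines))
}
```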
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-313128 -n ha-313128
helpers_test.go:261: (dbg) Run:  kubectl --context ha-313128 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-controller-manager-ha-313128-m03 kube-scheduler-ha-313128-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-313128 describe pod kube-controller-manager-ha-313128-m03 kube-scheduler-ha-313128-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-313128 describe pod kube-controller-manager-ha-313128-m03 kube-scheduler-ha-313128-m03: exit status 1 (61.094516ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-controller-manager-ha-313128-m03" not found
	Error from server (NotFound): pods "kube-scheduler-ha-313128-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-313128 describe pod kube-controller-manager-ha-313128-m03 kube-scheduler-ha-313128-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (814.04s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-313128 node delete m03 -v=7 --alsologtostderr: (15.296719957s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 7 (471.59605ms)

                                                
                                                
-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:12:29.986426   33855 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:12:29.986723   33855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:12:29.986732   33855 out.go:358] Setting ErrFile to fd 2...
	I0906 19:12:29.986737   33855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:12:29.986966   33855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:12:29.987178   33855 out.go:352] Setting JSON to false
	I0906 19:12:29.987205   33855 mustload.go:65] Loading cluster: ha-313128
	I0906 19:12:29.987309   33855 notify.go:220] Checking for updates...
	I0906 19:12:29.987724   33855 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:12:29.987745   33855 status.go:255] checking status of ha-313128 ...
	I0906 19:12:29.988215   33855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:12:29.988270   33855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:12:30.006693   33855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40647
	I0906 19:12:30.007073   33855 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:12:30.007748   33855 main.go:141] libmachine: Using API Version  1
	I0906 19:12:30.007788   33855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:12:30.008092   33855 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:12:30.008311   33855 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 19:12:30.010055   33855 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 19:12:30.010072   33855 host.go:66] Checking if "ha-313128" exists ...
	I0906 19:12:30.010483   33855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:12:30.010523   33855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:12:30.026496   33855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0906 19:12:30.026943   33855 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:12:30.027493   33855 main.go:141] libmachine: Using API Version  1
	I0906 19:12:30.027531   33855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:12:30.027860   33855 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:12:30.028085   33855 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:12:30.031614   33855 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:12:30.032115   33855 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:12:30.032155   33855 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:12:30.032320   33855 host.go:66] Checking if "ha-313128" exists ...
	I0906 19:12:30.032629   33855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:12:30.032665   33855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:12:30.050854   33855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32791
	I0906 19:12:30.051257   33855 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:12:30.051747   33855 main.go:141] libmachine: Using API Version  1
	I0906 19:12:30.051771   33855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:12:30.052065   33855 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:12:30.052238   33855 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:12:30.052410   33855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:12:30.052434   33855 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:12:30.054867   33855 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:12:30.055273   33855 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:12:30.055304   33855 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:12:30.055432   33855 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:12:30.055607   33855 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:12:30.055750   33855 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:12:30.055871   33855 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:12:30.141052   33855 ssh_runner.go:195] Run: systemctl --version
	I0906 19:12:30.147660   33855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:12:30.164086   33855 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 19:12:30.164119   33855 api_server.go:166] Checking apiserver status ...
	I0906 19:12:30.164155   33855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:12:30.179261   33855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5012/cgroup
	W0906 19:12:30.189400   33855 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5012/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 19:12:30.189449   33855 ssh_runner.go:195] Run: ls
	I0906 19:12:30.194048   33855 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 19:12:30.199413   33855 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 19:12:30.199450   33855 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 19:12:30.199464   33855 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:12:30.199483   33855 status.go:255] checking status of ha-313128-m02 ...
	I0906 19:12:30.199756   33855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:12:30.199796   33855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:12:30.215219   33855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40675
	I0906 19:12:30.215593   33855 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:12:30.216044   33855 main.go:141] libmachine: Using API Version  1
	I0906 19:12:30.216064   33855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:12:30.216355   33855 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:12:30.216549   33855 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 19:12:30.218128   33855 status.go:330] ha-313128-m02 host status = "Running" (err=<nil>)
	I0906 19:12:30.218144   33855 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 19:12:30.218427   33855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:12:30.218464   33855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:12:30.233559   33855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39177
	I0906 19:12:30.234007   33855 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:12:30.234512   33855 main.go:141] libmachine: Using API Version  1
	I0906 19:12:30.234531   33855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:12:30.234798   33855 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:12:30.234985   33855 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 19:12:30.237481   33855 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 19:12:30.237832   33855 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 20:02:28 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 19:12:30.237851   33855 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 19:12:30.237994   33855 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 19:12:30.238299   33855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:12:30.238330   33855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:12:30.253119   33855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46671
	I0906 19:12:30.253557   33855 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:12:30.254017   33855 main.go:141] libmachine: Using API Version  1
	I0906 19:12:30.254034   33855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:12:30.254284   33855 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:12:30.254451   33855 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 19:12:30.254600   33855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:12:30.254617   33855 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 19:12:30.257488   33855 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 19:12:30.257919   33855 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 20:02:28 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 19:12:30.257942   33855 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 19:12:30.258115   33855 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 19:12:30.258301   33855 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 19:12:30.258423   33855 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 19:12:30.258532   33855 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 19:12:30.345754   33855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:12:30.362114   33855 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 19:12:30.362138   33855 api_server.go:166] Checking apiserver status ...
	I0906 19:12:30.362172   33855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:12:30.378087   33855 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup
	W0906 19:12:30.387320   33855 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1544/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 19:12:30.387364   33855 ssh_runner.go:195] Run: ls
	I0906 19:12:30.394456   33855 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 19:12:30.398685   33855 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0906 19:12:30.398713   33855 status.go:422] ha-313128-m02 apiserver status = Running (err=<nil>)
	I0906 19:12:30.398723   33855 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:12:30.398741   33855 status.go:255] checking status of ha-313128-m04 ...
	I0906 19:12:30.399036   33855 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:12:30.399076   33855 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:12:30.413972   33855 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35623
	I0906 19:12:30.414402   33855 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:12:30.414872   33855 main.go:141] libmachine: Using API Version  1
	I0906 19:12:30.414895   33855 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:12:30.415201   33855 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:12:30.415397   33855 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 19:12:30.416946   33855 status.go:330] ha-313128-m04 host status = "Stopped" (err=<nil>)
	I0906 19:12:30.416963   33855 status.go:343] host is not running, skipping remaining checks
	I0906 19:12:30.416971   33855 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-313128 -n ha-313128
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-313128 logs -n 25: (1.681729746s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m02 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04:/home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m04 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp testdata/cp-test.txt                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2237225197/001/cp-test_ha-313128-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128:/home/docker/cp-test_ha-313128-m04_ha-313128.txt                       |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128 sudo cat                                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128.txt                                 |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m02:/home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m02 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03:/home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m03 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-313128 node stop m02 -v=7                                                     | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-313128 node start m02 -v=7                                                    | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-313128 -v=7                                                           | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-313128 -v=7                                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-313128 --wait=true -v=7                                                    | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 19:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-313128                                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 19:12 UTC |                     |
	| node    | ha-313128 node delete m03 -v=7                                                   | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 19:12 UTC | 06 Sep 24 19:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 19:00:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:00:42.604662   30973 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:00:42.604922   30973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:42.604931   30973 out.go:358] Setting ErrFile to fd 2...
	I0906 19:00:42.604937   30973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:42.605118   30973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:00:42.605712   30973 out.go:352] Setting JSON to false
	I0906 19:00:42.606606   30973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2592,"bootTime":1725646651,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:00:42.606669   30973 start.go:139] virtualization: kvm guest
	I0906 19:00:42.609026   30973 out.go:177] * [ha-313128] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:00:42.610315   30973 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:00:42.610320   30973 notify.go:220] Checking for updates...
	I0906 19:00:42.612626   30973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:00:42.614046   30973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:00:42.615697   30973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:00:42.617289   30973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:00:42.618880   30973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:00:42.620642   30973 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:00:42.620737   30973 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:00:42.621181   30973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:00:42.621247   30973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:42.636849   30973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0906 19:00:42.637263   30973 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:42.637848   30973 main.go:141] libmachine: Using API Version  1
	I0906 19:00:42.637868   30973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:42.638214   30973 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:42.638435   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.676963   30973 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 19:00:42.678406   30973 start.go:297] selected driver: kvm2
	I0906 19:00:42.678423   30973 start.go:901] validating driver "kvm2" against &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default A
PIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headl
amp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:00:42.678622   30973 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:00:42.678996   30973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:00:42.679070   30973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:00:42.694855   30973 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:00:42.695667   30973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:00:42.695733   30973 cni.go:84] Creating CNI manager for ""
	I0906 19:00:42.695746   30973 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 19:00:42.695799   30973 start.go:340] cluster config:
	{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:00:42.695915   30973 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:00:42.698090   30973 out.go:177] * Starting "ha-313128" primary control-plane node in "ha-313128" cluster
	I0906 19:00:42.699706   30973 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:00:42.699746   30973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:00:42.699754   30973 cache.go:56] Caching tarball of preloaded images
	I0906 19:00:42.699837   30973 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:00:42.699848   30973 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 19:00:42.699961   30973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 19:00:42.700160   30973 start.go:360] acquireMachinesLock for ha-313128: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:00:42.700217   30973 start.go:364] duration metric: took 31.95µs to acquireMachinesLock for "ha-313128"
	I0906 19:00:42.700243   30973 start.go:96] Skipping create...Using existing machine configuration
	I0906 19:00:42.700253   30973 fix.go:54] fixHost starting: 
	I0906 19:00:42.700615   30973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:00:42.700669   30973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:42.715246   30973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0906 19:00:42.715721   30973 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:42.716296   30973 main.go:141] libmachine: Using API Version  1
	I0906 19:00:42.716319   30973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:42.716656   30973 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:42.716872   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.717048   30973 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 19:00:42.718801   30973 fix.go:112] recreateIfNeeded on ha-313128: state=Running err=<nil>
	W0906 19:00:42.718818   30973 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 19:00:42.722091   30973 out.go:177] * Updating the running kvm2 "ha-313128" VM ...
	I0906 19:00:42.723320   30973 machine.go:93] provisionDockerMachine start ...
	I0906 19:00:42.723341   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.723593   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.726581   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.727062   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.727086   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.727274   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.727450   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.727600   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.727717   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.727841   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.728035   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.728049   30973 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 19:00:42.842622   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128
	
	I0906 19:00:42.842652   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:42.842912   30973 buildroot.go:166] provisioning hostname "ha-313128"
	I0906 19:00:42.842943   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:42.843128   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.845900   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.846338   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.846367   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.846533   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.846705   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.846862   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.846998   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.847138   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.847339   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.847355   30973 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-313128 && echo "ha-313128" | sudo tee /etc/hostname
	I0906 19:00:42.971699   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128
	
	I0906 19:00:42.971726   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.974199   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.974577   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.974616   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.974777   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.974955   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.975110   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.975250   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.975389   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.975547   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.975561   30973 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-313128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-313128/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-313128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:00:43.086298   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:00:43.086336   30973 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:00:43.086383   30973 buildroot.go:174] setting up certificates
	I0906 19:00:43.086397   30973 provision.go:84] configureAuth start
	I0906 19:00:43.086411   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:43.086768   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:00:43.089761   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.090172   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.090221   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.090371   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.092707   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.093131   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.093150   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.093281   30973 provision.go:143] copyHostCerts
	I0906 19:00:43.093308   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:00:43.093346   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:00:43.093371   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:00:43.093449   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:00:43.093549   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:00:43.093574   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:00:43.093581   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:00:43.093618   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:00:43.093687   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:00:43.093709   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:00:43.093714   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:00:43.093750   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:00:43.093833   30973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.ha-313128 san=[127.0.0.1 192.168.39.70 ha-313128 localhost minikube]
	I0906 19:00:43.258285   30973 provision.go:177] copyRemoteCerts
	I0906 19:00:43.258366   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:00:43.258394   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.260947   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.261383   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.261412   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.261600   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:43.261791   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.261926   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:43.262075   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:00:43.348224   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 19:00:43.348285   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:00:43.374716   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 19:00:43.374792   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0906 19:00:43.403028   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 19:00:43.403095   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 19:00:43.428263   30973 provision.go:87] duration metric: took 341.855389ms to configureAuth
	I0906 19:00:43.428293   30973 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:00:43.428524   30973 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:00:43.428598   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.431629   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.432063   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.432090   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.432269   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:43.432477   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.432645   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.432802   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:43.432969   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:43.433127   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:43.433144   30973 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:02:14.266261   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:02:14.266292   30973 machine.go:96] duration metric: took 1m31.542957549s to provisionDockerMachine
	I0906 19:02:14.266304   30973 start.go:293] postStartSetup for "ha-313128" (driver="kvm2")
	I0906 19:02:14.266315   30973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:02:14.266329   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.266669   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:02:14.266694   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.270021   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.270486   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.270511   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.270640   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.270873   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.271053   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.271182   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.357410   30973 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:02:14.362343   30973 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:02:14.362367   30973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:02:14.362428   30973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:02:14.362506   30973 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:02:14.362518   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 19:02:14.362611   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:02:14.372770   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:02:14.400357   30973 start.go:296] duration metric: took 134.040576ms for postStartSetup
	I0906 19:02:14.400419   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.400730   30973 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 19:02:14.400755   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.403411   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.403817   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.403842   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.403988   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.404164   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.404325   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.404472   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	W0906 19:02:14.487375   30973 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0906 19:02:14.487427   30973 fix.go:56] duration metric: took 1m31.787174067s for fixHost
	I0906 19:02:14.487448   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.490126   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.490510   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.490541   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.490726   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.490930   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.491084   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.491223   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.491366   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:02:14.491537   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:02:14.491547   30973 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:02:14.598045   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649334.553360444
	
	I0906 19:02:14.598070   30973 fix.go:216] guest clock: 1725649334.553360444
	I0906 19:02:14.598077   30973 fix.go:229] Guest: 2024-09-06 19:02:14.553360444 +0000 UTC Remote: 2024-09-06 19:02:14.487433708 +0000 UTC m=+91.917728709 (delta=65.926736ms)
	I0906 19:02:14.598105   30973 fix.go:200] guest clock delta is within tolerance: 65.926736ms
	I0906 19:02:14.598121   30973 start.go:83] releasing machines lock for "ha-313128", held for 1m31.897881945s
	I0906 19:02:14.598147   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.598410   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:02:14.600993   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.601335   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.601359   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.601535   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602064   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602246   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602360   30973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:02:14.602395   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.602490   30973 ssh_runner.go:195] Run: cat /version.json
	I0906 19:02:14.602505   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.605042   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605172   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605395   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.605418   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605547   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.605652   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.605677   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605689   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.605801   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.605856   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.605923   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.606008   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.606047   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.606191   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.682320   30973 ssh_runner.go:195] Run: systemctl --version
	I0906 19:02:14.707871   30973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:02:14.868709   30973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 19:02:14.878107   30973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:02:14.878182   30973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:02:14.887795   30973 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 19:02:14.887825   30973 start.go:495] detecting cgroup driver to use...
	I0906 19:02:14.887900   30973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:02:14.905023   30973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:02:14.920380   30973 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:02:14.920478   30973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:02:14.936661   30973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:02:14.951264   30973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:02:15.102677   30973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:02:15.248271   30973 docker.go:233] disabling docker service ...
	I0906 19:02:15.248331   30973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:02:15.264423   30973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:02:15.278696   30973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:02:15.426846   30973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:02:15.574956   30973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:02:15.589843   30973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:02:15.609432   30973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 19:02:15.609504   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.620399   30973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:02:15.620463   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.630897   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.641484   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.651945   30973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:02:15.663429   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.674521   30973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.689183   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.700177   30973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:02:15.710433   30973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:02:15.720027   30973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:02:15.864474   30973 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 19:02:16.100883   30973 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:02:16.100949   30973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:02:16.106267   30973 start.go:563] Will wait 60s for crictl version
	I0906 19:02:16.106339   30973 ssh_runner.go:195] Run: which crictl
	I0906 19:02:16.110880   30973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:02:16.149993   30973 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 19:02:16.150090   30973 ssh_runner.go:195] Run: crio --version
	I0906 19:02:16.181738   30973 ssh_runner.go:195] Run: crio --version
	I0906 19:02:16.215139   30973 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 19:02:16.216581   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:02:16.219061   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:16.219402   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:16.219431   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:16.219550   30973 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 19:02:16.224692   30973 kubeadm.go:883] updating cluster {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:1
92.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false h
elm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:02:16.224825   30973 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:02:16.224887   30973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:02:16.279712   30973 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:02:16.279734   30973 crio.go:433] Images already preloaded, skipping extraction
	I0906 19:02:16.279784   30973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:02:16.314787   30973 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:02:16.314818   30973 cache_images.go:84] Images are preloaded, skipping loading
	I0906 19:02:16.314830   30973 kubeadm.go:934] updating node { 192.168.39.70 8443 v1.31.0 crio true true} ...
	I0906 19:02:16.314943   30973 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-313128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:02:16.315021   30973 ssh_runner.go:195] Run: crio config
	I0906 19:02:16.364038   30973 cni.go:84] Creating CNI manager for ""
	I0906 19:02:16.364072   30973 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 19:02:16.364092   30973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:02:16.364128   30973 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-313128 NodeName:ha-313128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:02:16.364353   30973 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-313128"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:02:16.364385   30973 kube-vip.go:115] generating kube-vip config ...
	I0906 19:02:16.364438   30973 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 19:02:16.376810   30973 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 19:02:16.376947   30973 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 19:02:16.377010   30973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 19:02:16.386554   30973 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:02:16.386654   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 19:02:16.396282   30973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 19:02:16.413426   30973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:02:16.430809   30973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0906 19:02:16.447378   30973 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0906 19:02:16.464060   30973 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0906 19:02:16.469045   30973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:02:16.610775   30973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:02:16.625535   30973 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128 for IP: 192.168.39.70
	I0906 19:02:16.625562   30973 certs.go:194] generating shared ca certs ...
	I0906 19:02:16.625577   30973 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.625717   30973 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:02:16.625753   30973 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:02:16.625762   30973 certs.go:256] generating profile certs ...
	I0906 19:02:16.625841   30973 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key
	I0906 19:02:16.625866   30973 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c
	I0906 19:02:16.625879   30973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.70 192.168.39.32 192.168.39.172 192.168.39.254]
	I0906 19:02:16.804798   30973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c ...
	I0906 19:02:16.804827   30973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c: {Name:mkbad82bfe626c7b530e91f2fb1afe292d0ae161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.805001   30973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c ...
	I0906 19:02:16.805015   30973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c: {Name:mk0ae7f160e2379f6800fc471c87e5a6b8b93da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.805088   30973 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt
	I0906 19:02:16.805220   30973 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key
	I0906 19:02:16.805349   30973 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key
	I0906 19:02:16.805363   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 19:02:16.805378   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 19:02:16.805391   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 19:02:16.805424   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 19:02:16.805440   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 19:02:16.805451   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 19:02:16.805460   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 19:02:16.805469   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 19:02:16.805512   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:02:16.805541   30973 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:02:16.805551   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:02:16.805578   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:02:16.805605   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:02:16.805628   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:02:16.805663   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:02:16.805690   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 19:02:16.805703   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:16.805716   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 19:02:16.806296   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:02:16.832409   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:02:16.856617   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:02:16.883121   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:02:16.908841   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 19:02:16.934050   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 19:02:16.957637   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:02:16.982352   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 19:02:17.007984   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:02:17.034211   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:02:17.058444   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:02:17.082266   30973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:02:17.099732   30973 ssh_runner.go:195] Run: openssl version
	I0906 19:02:17.105835   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:02:17.117417   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.122102   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.122167   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.127926   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:02:17.137341   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:02:17.147895   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.152327   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.152384   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.158147   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 19:02:17.167715   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:02:17.179028   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.183445   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.183521   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.189253   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 19:02:17.198545   30973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:02:17.203152   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 19:02:17.208885   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 19:02:17.214536   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 19:02:17.220261   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 19:02:17.226142   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 19:02:17.231663   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 19:02:17.237142   30973 kubeadm.go:392] StartCluster: {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.
168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm
-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:02:17.237264   30973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:02:17.237316   30973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:02:17.274034   30973 cri.go:89] found id: "9103596edb635c85d04deccce75e13f1cd3262538a222b30a0c94e764770d28c"
	I0906 19:02:17.274063   30973 cri.go:89] found id: "15aafcfc8e779931ee6d9a42dd1aab5a06c3de9f67ec6b3feb49305eed4103e0"
	I0906 19:02:17.274069   30973 cri.go:89] found id: "8fa4e79af67df589d61af4ab106d80e16d119e6feed8deff5827505fa804474c"
	I0906 19:02:17.274074   30973 cri.go:89] found id: "5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939"
	I0906 19:02:17.274078   30973 cri.go:89] found id: "ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d"
	I0906 19:02:17.274083   30973 cri.go:89] found id: "76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa"
	I0906 19:02:17.274087   30973 cri.go:89] found id: "76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b"
	I0906 19:02:17.274091   30973 cri.go:89] found id: "135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1"
	I0906 19:02:17.274095   30973 cri.go:89] found id: "13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d"
	I0906 19:02:17.274104   30973 cri.go:89] found id: "7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f"
	I0906 19:02:17.274108   30973 cri.go:89] found id: "9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387"
	I0906 19:02:17.274112   30973 cri.go:89] found id: "e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8"
	I0906 19:02:17.274116   30973 cri.go:89] found id: "a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f"
	I0906 19:02:17.274121   30973 cri.go:89] found id: ""
	I0906 19:02:17.274164   30973 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.006959608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649951006938812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1113b133-db6a-4a2c-b4df-bd74cceeca27 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.007679608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0cda838e-f70d-4ca4-a95e-7541e401b5de name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.007881724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0cda838e-f70d-4ca4-a95e-7541e401b5de name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.008794799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725649381484380479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5797d04311d37274a0a42cb3eebc2559a195c1202cabadd8d4b2208bf93cc186,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649370481993651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117
b7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85
ebe5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ce31a9f342048ead1d321eaeb8e3938678e106ee41e891c231022c89806e9f,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725649343291112866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0cda838e-f70d-4ca4-a95e-7541e401b5de name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.054579029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e4f0dca-f07a-4346-887a-e34502abe89e name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.054681632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e4f0dca-f07a-4346-887a-e34502abe89e name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.056066958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8890ac59-b02c-4bd1-9b76-0ec1713940c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.056588314Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649951056559523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8890ac59-b02c-4bd1-9b76-0ec1713940c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.057236709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83e6e34f-ea15-43b0-aafe-4be29f2514ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.057328839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83e6e34f-ea15-43b0-aafe-4be29f2514ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.057777461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725649381484380479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5797d04311d37274a0a42cb3eebc2559a195c1202cabadd8d4b2208bf93cc186,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649370481993651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117
b7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85
ebe5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ce31a9f342048ead1d321eaeb8e3938678e106ee41e891c231022c89806e9f,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725649343291112866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83e6e34f-ea15-43b0-aafe-4be29f2514ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.108299645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f08b2f74-709e-45c3-9d6b-26e470734651 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.108403383Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f08b2f74-709e-45c3-9d6b-26e470734651 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.110186870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73cb5909-41e6-43f6-adac-32dc78c548eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.110882136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649951110843714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73cb5909-41e6-43f6-adac-32dc78c548eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.112137068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a1e84c6-4e3f-4f0f-91b0-aebfbbdbe445 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.112219805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a1e84c6-4e3f-4f0f-91b0-aebfbbdbe445 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.112746237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725649381484380479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5797d04311d37274a0a42cb3eebc2559a195c1202cabadd8d4b2208bf93cc186,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649370481993651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117
b7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85
ebe5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ce31a9f342048ead1d321eaeb8e3938678e106ee41e891c231022c89806e9f,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725649343291112866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a1e84c6-4e3f-4f0f-91b0-aebfbbdbe445 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.154531973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7da0176e-8616-4c6d-9239-0a41bf2ee6a8 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.154605295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7da0176e-8616-4c6d-9239-0a41bf2ee6a8 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.155960210Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17884ab3-8a48-4d76-a77d-184d5f9c1028 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.156389074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649951156365259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17884ab3-8a48-4d76-a77d-184d5f9c1028 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.156916955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbdea1cf-91ad-4703-aef7-e4ebd24316a9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.156970575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbdea1cf-91ad-4703-aef7-e4ebd24316a9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:12:31 ha-313128 crio[3609]: time="2024-09-06 19:12:31.157423469Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725649381484380479,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5797d04311d37274a0a42cb3eebc2559a195c1202cabadd8d4b2208bf93cc186,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649370481993651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117
b7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85
ebe5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ce31a9f342048ead1d321eaeb8e3938678e106ee41e891c231022c89806e9f,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725649343291112866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cbdea1cf-91ad-4703-aef7-e4ebd24316a9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	563331b1df56b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       4                   a0a256d64c27f       storage-provisioner
	1cfd32c774caf       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      9 minutes ago       Running             kube-controller-manager   2                   4c7e7fc7137a0       kube-controller-manager-ha-313128
	8ef80321a5967       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      9 minutes ago       Running             kube-apiserver            3                   3ae5e99906a2e       kube-apiserver-ha-313128
	9d3f5b10c63ca       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      9 minutes ago       Running             busybox                   1                   7cbf701e90a6f       busybox-7dff88458-s2cgz
	5797d04311d37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       3                   a0a256d64c27f       storage-provisioner
	a170bb1c8a3cb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      9 minutes ago       Running             kube-vip                  0                   9a8c2a564ace3       kube-vip-ha-313128
	d3e14bee704aa       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Running             kindnet-cni               1                   419150e9a53e3       kindnet-h2trt
	bea01e33385d8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Running             kube-scheduler            1                   64b8d66092688       kube-scheduler-ha-313128
	36d954de08dab       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Running             etcd                      1                   54824bb3087ee       etcd-ha-313128
	7fab375b2e00c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   1                   4c7e7fc7137a0       kube-controller-manager-ha-313128
	25ee04d39c4c9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Running             coredns                   1                   d481cfc1806b6       coredns-6f6b679f8f-gccvh
	57ce31a9f3420       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            2                   3ae5e99906a2e       kube-apiserver-ha-313128
	77c80de1adc0a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Running             kube-proxy                1                   e453276f34782       kube-proxy-h5xn7
	f78069cd2a935       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Running             coredns                   1                   7356e11979968       coredns-6f6b679f8f-gk28z
	7b3f2cd2f6c9c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   18 minutes ago      Exited              busybox                   0                   74b84ec8f17a7       busybox-7dff88458-s2cgz
	5b950806bc4b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Exited              coredns                   0                   9151daea570f3       coredns-6f6b679f8f-gk28z
	76bbd732b8695       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Exited              coredns                   0                   8449d8c8bfa3e       coredns-6f6b679f8f-gccvh
	76ca94f153009       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    20 minutes ago      Exited              kindnet-cni               0                   a3128d8e090be       kindnet-h2trt
	135074e446370       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      21 minutes ago      Exited              kube-proxy                0                   dde7791c0770a       kube-proxy-h5xn7
	e32b22b9f83ac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Exited              etcd                      0                   0ced27e2ded46       etcd-ha-313128
	a406aeec43303       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      21 minutes ago      Exited              kube-scheduler            0                   aeb85ed29ab1d       kube-scheduler-ha-313128
	
	
	==> coredns [25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85ebe5629edada2adae88766] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1569945278]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:02:25.248) (total time: 10000ms):
	Trace[1569945278]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (19:02:35.249)
	Trace[1569945278]: [10.000824892s] [10.000824892s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939] <==
	[INFO] 10.244.0.4:42561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009569s
	[INFO] 10.244.0.4:55114 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086084s
	[INFO] 10.244.0.4:53953 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067022s
	[INFO] 10.244.1.2:48594 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121564s
	[INFO] 10.244.1.2:53114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166914s
	[INFO] 10.244.2.2:34659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158468s
	[INFO] 10.244.2.2:34171 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176512s
	[INFO] 10.244.0.4:58990 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009694s
	[INFO] 10.244.0.4:43562 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118003s
	[INFO] 10.244.0.4:33609 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086781s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1840&timeout=7m47s&timeoutSeconds=467&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1840&timeout=6m57s&timeoutSeconds=417&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[442499001]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.006) (total time: 13786ms):
	Trace[442499001]: ---"Objects listed" error:Unauthorized 13786ms (19:00:41.792)
	Trace[442499001]: [13.786231314s] [13.786231314s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[85447720]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.308) (total time: 13485ms):
	Trace[85447720]: ---"Objects listed" error:Unauthorized 13484ms (19:00:41.792)
	Trace[85447720]: [13.485399749s] [13.485399749s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa] <==
	[INFO] 10.244.1.2:35244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089298s
	[INFO] 10.244.1.2:54461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083864s
	[INFO] 10.244.2.2:46046 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126212s
	[INFO] 10.244.2.2:45762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078805s
	[INFO] 10.244.0.4:56166 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109081s
	[INFO] 10.244.1.2:44485 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175559s
	[INFO] 10.244.1.2:60331 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113433s
	[INFO] 10.244.2.2:33944 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094759s
	[INFO] 10.244.2.2:54249 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007626s
	[INFO] 10.244.0.4:34049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091783s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1840&timeout=6m52s&timeoutSeconds=412&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1840&timeout=9m31s&timeoutSeconds=571&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1362283421]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.166) (total time: 13625ms):
	Trace[1362283421]: ---"Objects listed" error:Unauthorized 13625ms (19:00:41.791)
	Trace[1362283421]: [13.625497855s] [13.625497855s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2000776186]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.018) (total time: 13773ms):
	Trace[2000776186]: ---"Objects listed" error:Unauthorized 13773ms (19:00:41.792)
	Trace[2000776186]: [13.773675488s] [13.773675488s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1498074878]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:02:28.897) (total time: 10001ms):
	Trace[1498074878]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:02:38.898)
	Trace[1498074878]: [10.001684214s] [10.001684214s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:54584->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:54584->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-313128
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_51_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:12:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:08:18 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:08:18 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:08:18 +0000   Fri, 06 Sep 2024 18:51:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:08:18 +0000   Fri, 06 Sep 2024 18:51:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-313128
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a8374058d8a4ce69ddf9d9b9a6bab88
	  System UUID:                5a837405-8d8a-4ce6-9ddf-9d9b9a6bab88
	  Boot ID:                    4ac8491f-e614-44c2-96e0-f1733bbe0f17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s2cgz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-6f6b679f8f-gccvh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-6f6b679f8f-gk28z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-ha-313128                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-h2trt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-313128             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-313128    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-h5xn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-313128             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-313128                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 9m26s              kube-proxy       
	  Normal   NodeHasSufficientPID     21m                kubelet          Node ha-313128 status is now: NodeHasSufficientPID
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node ha-313128 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node ha-313128 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           21m                node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Normal   NodeReady                20m                kubelet          Node ha-313128 status is now: NodeReady
	  Normal   RegisteredNode           20m                node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             10m (x3 over 11m)  kubelet          Node ha-313128 status is now: NodeNotReady
	  Normal   RegisteredNode           9m31s              node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	  Normal   RegisteredNode           9m24s              node-controller  Node ha-313128 event: Registered Node ha-313128 in Controller
	
	
	Name:               ha-313128-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_52_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:52:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:12:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:08:53 +0000   Fri, 06 Sep 2024 19:03:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:08:53 +0000   Fri, 06 Sep 2024 19:03:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:08:53 +0000   Fri, 06 Sep 2024 19:03:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:08:53 +0000   Fri, 06 Sep 2024 19:03:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    ha-313128-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9324a423f4b54997b7d3837f23afbaaf
	  System UUID:                9324a423-f4b5-4997-b7d3-837f23afbaaf
	  Boot ID:                    b4095a80-7da5-4719-8b2f-897a61c535e2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-54m66                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-313128-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-t65ls                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-313128-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-313128-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-xjp6p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-313128-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-313128-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 20m                    kube-proxy       
	  Normal  Starting                 9m8s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  20m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                    node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)      kubelet          Node ha-313128-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)      kubelet          Node ha-313128-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)      kubelet          Node ha-313128-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m                    node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  NodeNotReady             16m                    node-controller  Node ha-313128-m02 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  9m51s (x8 over 9m51s)  kubelet          Node ha-313128-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    9m51s (x8 over 9m51s)  kubelet          Node ha-313128-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m51s (x7 over 9m51s)  kubelet          Node ha-313128-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m31s                  node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	  Normal  RegisteredNode           9m24s                  node-controller  Node ha-313128-m02 event: Registered Node ha-313128-m02 in Controller
	
	
	Name:               ha-313128-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-313128-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=ha-313128
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T18_54_39_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:54:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-313128-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:58:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 19:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 19:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 19:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 18:55:09 +0000   Fri, 06 Sep 2024 19:03:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    ha-313128-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1284faaf1604a6db25bba3bb7ed5953
	  System UUID:                f1284faa-f160-4a6d-b25b-ba3bb7ed5953
	  Boot ID:                    25844c67-e2f9-444b-99b9-94b7e385f59f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fsbs9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-proxy-8tm7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node ha-313128-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node ha-313128-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node ha-313128-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-313128-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m31s              node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  RegisteredNode           9m24s              node-controller  Node ha-313128-m04 event: Registered Node ha-313128-m04 in Controller
	  Normal  NodeNotReady             8m51s              node-controller  Node ha-313128-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 18:51] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072122] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.201564] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.131661] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.284243] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.067260] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.541515] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.060417] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251462] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.088029] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.073110] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.070796] kauditd_printk_skb: 38 callbacks suppressed
	[Sep 6 18:52] kauditd_printk_skb: 24 callbacks suppressed
	[Sep 6 19:02] systemd-fstab-generator[3534]: Ignoring "noauto" option for root device
	[  +0.149812] systemd-fstab-generator[3546]: Ignoring "noauto" option for root device
	[  +0.178231] systemd-fstab-generator[3560]: Ignoring "noauto" option for root device
	[  +0.144887] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[  +0.283505] systemd-fstab-generator[3600]: Ignoring "noauto" option for root device
	[  +0.753951] systemd-fstab-generator[3694]: Ignoring "noauto" option for root device
	[  +6.401831] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.203445] kauditd_printk_skb: 87 callbacks suppressed
	[Sep 6 19:03] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a] <==
	{"level":"info","ts":"2024-09-06T19:03:51.996038Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:03:52.002402Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:03:52.015734Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d9e0442f914d2c09","to":"63c578731edaad90","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-06T19:03:52.015842Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:03:52.027875Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"d9e0442f914d2c09","to":"63c578731edaad90","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-06T19:03:52.028748Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:12:18.625779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 switched to configuration voters=(6652057935522279523 15699623272105454601)"}
	{"level":"info","ts":"2024-09-06T19:12:18.629608Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b9ca18127a3e3182","local-member-id":"d9e0442f914d2c09","removed-remote-peer-id":"63c578731edaad90","removed-remote-peer-urls":["https://192.168.39.172:2380"]}
	{"level":"info","ts":"2024-09-06T19:12:18.629776Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:12:18.630125Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:12:18.630319Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:12:18.630854Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:12:18.631249Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:12:18.630189Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"d9e0442f914d2c09","removed-member-id":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:12:18.631663Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-09-06T19:12:18.631639Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:12:18.632126Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90","error":"context canceled"}
	{"level":"warn","ts":"2024-09-06T19:12:18.635367Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"63c578731edaad90","error":"failed to read 63c578731edaad90 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-06T19:12:18.635815Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:12:18.635983Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90","error":"context canceled"}
	{"level":"info","ts":"2024-09-06T19:12:18.636039Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:12:18.636130Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:12:18.636162Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"d9e0442f914d2c09","removed-remote-peer-id":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:12:18.655801Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"d9e0442f914d2c09","remote-peer-id-stream-handler":"d9e0442f914d2c09","remote-peer-id-from":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:12:18.658925Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"d9e0442f914d2c09","remote-peer-id-stream-handler":"d9e0442f914d2c09","remote-peer-id-from":"63c578731edaad90"}
	
	
	==> etcd [e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8] <==
	2024/09/06 19:00:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/06 19:00:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-06T19:00:43.692458Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.70:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:00:43.692566Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.70:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:00:43.692746Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"d9e0442f914d2c09","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-06T19:00:43.692938Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.692970Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693066Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693169Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693208Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693273Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693377Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693398Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693561Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693580Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693692Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693802Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693879Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693927Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:00:43.697978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.908057312s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-06T19:00:43.698055Z","caller":"traceutil/trace.go:171","msg":"trace[1664952415] range","detail":"{range_begin:; range_end:; }","duration":"1.908148083s","start":"2024-09-06T19:00:41.789897Z","end":"2024-09-06T19:00:43.698045Z","steps":["trace[1664952415] 'agreement among raft nodes before linearized reading'  (duration: 1.908055362s)"],"step_count":1}
	{"level":"error","ts":"2024-09-06T19:00:43.698121Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-06T19:00:43.697906Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2024-09-06T19:00:43.698981Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2024-09-06T19:00:43.699230Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-313128","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.70:2380"],"advertise-client-urls":["https://192.168.39.70:2379"]}
	
	
	==> kernel <==
	 19:12:31 up 21 min,  0 users,  load average: 0.32, 0.24, 0.20
	Linux ha-313128 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b] <==
	I0906 19:00:13.769733       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:00:13.769887       1 main.go:299] handling current node
	I0906 19:00:13.770039       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:00:13.770068       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:00:13.770242       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:00:13.770325       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:00:13.770653       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:00:13.770688       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:00:23.769408       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:00:23.769600       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:00:23.769804       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:00:23.769858       1 main.go:299] handling current node
	I0906 19:00:23.769886       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:00:23.769939       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:00:23.770105       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:00:23.770150       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	E0906 19:00:26.750051       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1815&timeout=5m45s&timeoutSeconds=345&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0906 19:00:33.769131       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:00:33.769253       1 main.go:299] handling current node
	I0906 19:00:33.769290       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:00:33.769309       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:00:33.769557       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:00:33.769593       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:00:33.769784       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:00:33.769821       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0] <==
	I0906 19:11:54.881109       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:12:04.880583       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:12:04.880697       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:12:04.880835       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:12:04.880867       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:12:04.880949       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:12:04.880980       1 main.go:299] handling current node
	I0906 19:12:04.881002       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:12:04.881018       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:12:14.877332       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:12:14.877550       1 main.go:299] handling current node
	I0906 19:12:14.877628       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:12:14.877662       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:12:14.877953       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:12:14.878003       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:12:14.878087       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:12:14.878107       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:12:24.874981       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:12:24.875039       1 main.go:299] handling current node
	I0906 19:12:24.875053       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:12:24.875059       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:12:24.875197       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:12:24.875203       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:12:24.875358       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:12:24.875382       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [57ce31a9f342048ead1d321eaeb8e3938678e106ee41e891c231022c89806e9f] <==
	I0906 19:02:24.035108       1 options.go:228] external host was not specified, using 192.168.39.70
	I0906 19:02:24.043154       1 server.go:142] Version: v1.31.0
	I0906 19:02:24.043215       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:02:24.501757       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0906 19:02:24.505574       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:02:24.510198       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0906 19:02:24.510268       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0906 19:02:24.510576       1 instance.go:232] Using reconciler: lease
	W0906 19:02:44.499795       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:02:44.499795       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0906 19:02:44.511859       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0906 19:02:44.511884       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40] <==
	I0906 19:03:03.588901       1 aggregator.go:169] waiting for initial CRD sync...
	I0906 19:03:03.588911       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0906 19:03:03.688787       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:03:03.690694       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:03:03.690782       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0906 19:03:03.690838       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:03:03.695425       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:03:03.695465       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:03:03.695522       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:03:03.695528       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:03:03.695533       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:03:03.697257       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0906 19:03:03.700575       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.172 192.168.39.32]
	I0906 19:03:03.712722       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:03:03.712957       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:03:03.712998       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:03:03.716307       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0906 19:03:03.722515       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:03:03.722569       1 policy_source.go:224] refreshing policies
	I0906 19:03:03.779663       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:03:03.802361       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:03:03.811625       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0906 19:03:03.814925       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0906 19:03:04.604439       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:03:05.231781       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.172 192.168.39.32 192.168.39.70]
	
	
	==> kube-controller-manager [1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced] <==
	I0906 19:03:46.066221       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 19:03:47.317715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:03:47.336868       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:03:48.130331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="117.879µs"
	I0906 19:03:48.603204       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	I0906 19:03:50.984555       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:03:52.219590       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m04"
	I0906 19:04:07.524796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.759175ms"
	I0906 19:04:07.527288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="84.501µs"
	I0906 19:04:17.628091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:08:18.837233       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128"
	I0906 19:08:53.572898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m02"
	I0906 19:09:23.576952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:12:15.218392       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:12:15.241187       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	I0906 19:12:15.376826       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="103.787507ms"
	I0906 19:12:15.402875       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.986343ms"
	I0906 19:12:15.419849       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.917283ms"
	I0906 19:12:15.420160       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.022µs"
	I0906 19:12:15.458788       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.324944ms"
	I0906 19:12:15.459600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="139.557µs"
	I0906 19:12:17.444318       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.984µs"
	I0906 19:12:18.015704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="94.964µs"
	I0906 19:12:18.020646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.273µs"
	I0906 19:12:29.197712       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-313128-m03"
	
	
	==> kube-controller-manager [7fab375b2e00c6c1c477e49d20575c282cf15631db08117b7cbd6669002057a7] <==
	I0906 19:02:25.178571       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:02:25.751144       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:02:25.751189       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:02:25.753093       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:02:25.753241       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:02:25.753740       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:02:25.753823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0906 19:02:45.757127       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.70:8443/healthz\": dial tcp 192.168.39.70:8443: connect: connection refused"
	
	
	==> kube-proxy [135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1] <==
	E0906 18:59:34.909836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:37.981970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:37.982117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:37.982239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:37.982324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:37.982672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:37.982810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:44.125613       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:44.125832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:44.125977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:44.126031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:47.196966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:47.197099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:53.343030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:53.343098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:53.343233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:53.343270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:02.557917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:02.558078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:11.774788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:11.774869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:20.989983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:20.990159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:24.061791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:24.062012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:02:26.941011       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0906 19:02:30.014890       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0906 19:02:33.085953       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0906 19:02:39.230311       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0906 19:02:48.446328       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0906 19:03:04.718998       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.70"]
	E0906 19:03:04.719127       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:03:04.788646       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:03:04.788721       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:03:04.790601       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:03:04.795556       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:03:04.800789       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:03:04.800820       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:03:04.804328       1 config.go:197] "Starting service config controller"
	I0906 19:03:04.804405       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:03:04.804517       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:03:04.804524       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:03:04.807066       1 config.go:326] "Starting node config controller"
	I0906 19:03:04.807107       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:03:04.905154       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:03:04.905252       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:03:04.907240       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f] <==
	E0906 18:54:39.143315       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8tm7b\": pod kube-proxy-8tm7b is already assigned to node \"ha-313128-m04\"" pod="kube-system/kube-proxy-8tm7b"
	I0906 18:54:39.143372       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8tm7b" node="ha-313128-m04"
	E0906 18:54:39.143180       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k9szn\": pod kindnet-k9szn is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k9szn" node="ha-313128-m04"
	E0906 18:54:39.144192       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fdc10711-7099-424e-885e-65589f5642e5(kube-system/kindnet-k9szn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k9szn"
	E0906 18:54:39.144252       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k9szn\": pod kindnet-k9szn is already assigned to node \"ha-313128-m04\"" pod="kube-system/kindnet-k9szn"
	I0906 18:54:39.144297       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k9szn" node="ha-313128-m04"
	E0906 18:54:39.236601       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rnm78\": pod kube-proxy-rnm78 is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rnm78" node="ha-313128-m04"
	E0906 18:54:39.236925       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rnm78\": pod kube-proxy-rnm78 is already assigned to node \"ha-313128-m04\"" pod="kube-system/kube-proxy-rnm78"
	I0906 18:54:39.240895       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rnm78" node="ha-313128-m04"
	E0906 19:00:32.447945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0906 19:00:34.548228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:34.780196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0906 19:00:34.781245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0906 19:00:34.940096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:36.433926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0906 19:00:36.554090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0906 19:00:38.972401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0906 19:00:38.989303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:40.451678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:40.700326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0906 19:00:41.073589       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0906 19:00:41.963164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0906 19:00:42.456392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	I0906 19:00:43.531312       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0906 19:00:43.532028       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9] <==
	W0906 19:02:53.921617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.70:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:53.921659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.70:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:54.249952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.70:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:54.250053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.70:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:54.365726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.70:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:54.365850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.70:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:54.992268       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.70:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:54.992431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.70:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:55.233887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.70:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:55.234001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.70:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:55.498272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.70:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:55.498399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.70:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:55.867188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.70:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:55.867263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.70:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:59.506956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.70:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:59.507082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.70:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:02:59.805677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.70:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:02:59.805807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.70:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:03:03.619138       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 19:03:03.619880       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0906 19:03:03.674248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 19:03:03.674446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:03:03.674405       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 19:03:03.674591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0906 19:03:27.129977       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 19:11:15 ha-313128 kubelet[1323]: E0906 19:11:15.799846    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649875799094951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:25 ha-313128 kubelet[1323]: E0906 19:11:25.512333    1323 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:11:25 ha-313128 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:11:25 ha-313128 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:11:25 ha-313128 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:11:25 ha-313128 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:11:25 ha-313128 kubelet[1323]: E0906 19:11:25.801451    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649885801143816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:25 ha-313128 kubelet[1323]: E0906 19:11:25.801514    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649885801143816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:35 ha-313128 kubelet[1323]: E0906 19:11:35.803389    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649895802942496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:35 ha-313128 kubelet[1323]: E0906 19:11:35.803428    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649895802942496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:45 ha-313128 kubelet[1323]: E0906 19:11:45.805656    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649905804829244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:45 ha-313128 kubelet[1323]: E0906 19:11:45.805702    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649905804829244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:55 ha-313128 kubelet[1323]: E0906 19:11:55.812722    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649915812247276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:11:55 ha-313128 kubelet[1323]: E0906 19:11:55.813115    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649915812247276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:12:05 ha-313128 kubelet[1323]: E0906 19:12:05.815335    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649925814398837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:12:05 ha-313128 kubelet[1323]: E0906 19:12:05.815359    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649925814398837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:12:15 ha-313128 kubelet[1323]: E0906 19:12:15.819011    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649935818112074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:12:15 ha-313128 kubelet[1323]: E0906 19:12:15.819036    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649935818112074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:12:25 ha-313128 kubelet[1323]: E0906 19:12:25.511823    1323 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:12:25 ha-313128 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:12:25 ha-313128 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:12:25 ha-313128 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:12:25 ha-313128 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:12:25 ha-313128 kubelet[1323]: E0906 19:12:25.820716    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649945820137255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:12:25 ha-313128 kubelet[1323]: E0906 19:12:25.820762    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725649945820137255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:12:30.725635   33938 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19576-6021/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-313128 -n ha-313128
helpers_test.go:261: (dbg) Run:  kubectl --context ha-313128 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-vt4pc kube-controller-manager-ha-313128-m03 kube-scheduler-ha-313128-m03
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-313128 describe pod busybox-7dff88458-vt4pc kube-controller-manager-ha-313128-m03 kube-scheduler-ha-313128-m03
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ha-313128 describe pod busybox-7dff88458-vt4pc kube-controller-manager-ha-313128-m03 kube-scheduler-ha-313128-m03: exit status 1 (76.48841ms)

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-vt4pc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hzhpc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hzhpc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  17s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  15s                default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  15s (x2 over 17s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kube-controller-manager-ha-313128-m03" not found
	Error from server (NotFound): pods "kube-scheduler-ha-313128-m03" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ha-313128 describe pod busybox-7dff88458-vt4pc kube-controller-manager-ha-313128-m03 kube-scheduler-ha-313128-m03: exit status 1
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (18.14s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (173.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 stop -v=7 --alsologtostderr: exit status 82 (2m2.154809043s)

                                                
                                                
-- stdout --
	* Stopping node "ha-313128-m04"  ...
	* Stopping node "ha-313128-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:12:33.191479   34075 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:12:33.191742   34075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:12:33.191753   34075 out.go:358] Setting ErrFile to fd 2...
	I0906 19:12:33.191759   34075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:12:33.191981   34075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:12:33.192222   34075 out.go:352] Setting JSON to false
	I0906 19:12:33.192315   34075 mustload.go:65] Loading cluster: ha-313128
	I0906 19:12:33.192666   34075 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:12:33.192767   34075 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 19:12:33.193082   34075 mustload.go:65] Loading cluster: ha-313128
	I0906 19:12:33.193248   34075 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:12:33.193289   34075 stop.go:39] StopHost: ha-313128-m04
	I0906 19:12:33.193704   34075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:12:33.193761   34075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:12:33.208288   34075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36419
	I0906 19:12:33.208727   34075 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:12:33.209241   34075 main.go:141] libmachine: Using API Version  1
	I0906 19:12:33.209271   34075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:12:33.209653   34075 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:12:33.211865   34075 out.go:177] * Stopping node "ha-313128-m04"  ...
	I0906 19:12:33.213073   34075 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 19:12:33.213099   34075 main.go:141] libmachine: (ha-313128-m04) Calling .DriverName
	I0906 19:12:33.213327   34075 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 19:12:33.213347   34075 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 19:12:33.214973   34075 retry.go:31] will retry after 350.542056ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0906 19:12:33.566521   34075 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 19:12:33.568036   34075 retry.go:31] will retry after 492.065219ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0906 19:12:34.060678   34075 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	I0906 19:12:34.062365   34075 retry.go:31] will retry after 826.439656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0906 19:12:34.889336   34075 main.go:141] libmachine: (ha-313128-m04) Calling .GetSSHHostname
	W0906 19:12:34.891025   34075 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0906 19:12:34.891080   34075 main.go:141] libmachine: Stopping "ha-313128-m04"...
	I0906 19:12:34.891092   34075 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 19:12:34.892216   34075 stop.go:66] stop err: Machine "ha-313128-m04" is already stopped.
	I0906 19:12:34.892240   34075 stop.go:69] host is already stopped
	I0906 19:12:34.892258   34075 stop.go:39] StopHost: ha-313128-m02
	I0906 19:12:34.892548   34075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:12:34.892585   34075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:12:34.907409   34075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0906 19:12:34.907769   34075 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:12:34.908205   34075 main.go:141] libmachine: Using API Version  1
	I0906 19:12:34.908222   34075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:12:34.908597   34075 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:12:34.910848   34075 out.go:177] * Stopping node "ha-313128-m02"  ...
	I0906 19:12:34.912159   34075 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 19:12:34.912177   34075 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 19:12:34.912390   34075 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 19:12:34.912415   34075 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 19:12:34.915175   34075 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 19:12:34.915634   34075 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 20:02:28 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 19:12:34.915664   34075 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 19:12:34.915815   34075 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 19:12:34.915977   34075 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 19:12:34.916121   34075 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 19:12:34.916281   34075 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	I0906 19:12:35.004532   34075 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0906 19:12:35.059716   34075 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0906 19:12:35.113030   34075 main.go:141] libmachine: Stopping "ha-313128-m02"...
	I0906 19:12:35.113057   34075 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 19:12:35.114483   34075 main.go:141] libmachine: (ha-313128-m02) Calling .Stop
	I0906 19:12:35.117656   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 0/120
	I0906 19:12:36.119067   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 1/120
	I0906 19:12:37.120420   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 2/120
	I0906 19:12:38.121821   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 3/120
	I0906 19:12:39.123461   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 4/120
	I0906 19:12:40.125208   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 5/120
	I0906 19:12:41.127104   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 6/120
	I0906 19:12:42.128427   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 7/120
	I0906 19:12:43.129879   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 8/120
	I0906 19:12:44.131519   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 9/120
	I0906 19:12:45.133726   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 10/120
	I0906 19:12:46.135394   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 11/120
	I0906 19:12:47.136901   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 12/120
	I0906 19:12:48.138626   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 13/120
	I0906 19:12:49.140015   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 14/120
	I0906 19:12:50.141917   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 15/120
	I0906 19:12:51.143691   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 16/120
	I0906 19:12:52.145158   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 17/120
	I0906 19:12:53.147645   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 18/120
	I0906 19:12:54.149225   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 19/120
	I0906 19:12:55.151100   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 20/120
	I0906 19:12:56.152600   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 21/120
	I0906 19:12:57.154152   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 22/120
	I0906 19:12:58.155408   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 23/120
	I0906 19:12:59.156891   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 24/120
	I0906 19:13:00.158896   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 25/120
	I0906 19:13:01.160202   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 26/120
	I0906 19:13:02.161652   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 27/120
	I0906 19:13:03.163086   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 28/120
	I0906 19:13:04.164418   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 29/120
	I0906 19:13:05.165932   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 30/120
	I0906 19:13:06.167275   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 31/120
	I0906 19:13:07.168746   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 32/120
	I0906 19:13:08.170272   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 33/120
	I0906 19:13:09.171705   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 34/120
	I0906 19:13:10.173340   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 35/120
	I0906 19:13:11.174798   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 36/120
	I0906 19:13:12.175965   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 37/120
	I0906 19:13:13.177323   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 38/120
	I0906 19:13:14.178576   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 39/120
	I0906 19:13:15.180082   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 40/120
	I0906 19:13:16.181393   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 41/120
	I0906 19:13:17.182562   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 42/120
	I0906 19:13:18.183847   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 43/120
	I0906 19:13:19.185101   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 44/120
	I0906 19:13:20.187206   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 45/120
	I0906 19:13:21.188405   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 46/120
	I0906 19:13:22.189791   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 47/120
	I0906 19:13:23.190954   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 48/120
	I0906 19:13:24.192380   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 49/120
	I0906 19:13:25.194195   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 50/120
	I0906 19:13:26.195483   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 51/120
	I0906 19:13:27.196889   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 52/120
	I0906 19:13:28.198212   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 53/120
	I0906 19:13:29.199579   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 54/120
	I0906 19:13:30.201217   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 55/120
	I0906 19:13:31.202472   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 56/120
	I0906 19:13:32.203741   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 57/120
	I0906 19:13:33.205142   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 58/120
	I0906 19:13:34.206443   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 59/120
	I0906 19:13:35.208264   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 60/120
	I0906 19:13:36.209392   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 61/120
	I0906 19:13:37.210663   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 62/120
	I0906 19:13:38.211924   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 63/120
	I0906 19:13:39.213255   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 64/120
	I0906 19:13:40.215062   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 65/120
	I0906 19:13:41.216363   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 66/120
	I0906 19:13:42.217664   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 67/120
	I0906 19:13:43.218897   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 68/120
	I0906 19:13:44.220277   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 69/120
	I0906 19:13:45.221913   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 70/120
	I0906 19:13:46.223111   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 71/120
	I0906 19:13:47.224363   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 72/120
	I0906 19:13:48.225519   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 73/120
	I0906 19:13:49.226851   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 74/120
	I0906 19:13:50.229032   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 75/120
	I0906 19:13:51.230474   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 76/120
	I0906 19:13:52.231721   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 77/120
	I0906 19:13:53.233176   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 78/120
	I0906 19:13:54.234490   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 79/120
	I0906 19:13:55.236263   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 80/120
	I0906 19:13:56.237802   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 81/120
	I0906 19:13:57.239239   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 82/120
	I0906 19:13:58.240498   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 83/120
	I0906 19:13:59.241939   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 84/120
	I0906 19:14:00.243845   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 85/120
	I0906 19:14:01.245448   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 86/120
	I0906 19:14:02.246973   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 87/120
	I0906 19:14:03.248241   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 88/120
	I0906 19:14:04.249594   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 89/120
	I0906 19:14:05.251261   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 90/120
	I0906 19:14:06.252591   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 91/120
	I0906 19:14:07.254209   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 92/120
	I0906 19:14:08.256193   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 93/120
	I0906 19:14:09.258414   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 94/120
	I0906 19:14:10.260086   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 95/120
	I0906 19:14:11.261634   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 96/120
	I0906 19:14:12.263159   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 97/120
	I0906 19:14:13.264518   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 98/120
	I0906 19:14:14.265950   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 99/120
	I0906 19:14:15.267851   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 100/120
	I0906 19:14:16.269262   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 101/120
	I0906 19:14:17.270633   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 102/120
	I0906 19:14:18.271904   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 103/120
	I0906 19:14:19.273709   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 104/120
	I0906 19:14:20.275106   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 105/120
	I0906 19:14:21.276323   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 106/120
	I0906 19:14:22.277672   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 107/120
	I0906 19:14:23.279067   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 108/120
	I0906 19:14:24.280278   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 109/120
	I0906 19:14:25.281848   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 110/120
	I0906 19:14:26.283298   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 111/120
	I0906 19:14:27.284448   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 112/120
	I0906 19:14:28.286163   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 113/120
	I0906 19:14:29.287440   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 114/120
	I0906 19:14:30.289162   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 115/120
	I0906 19:14:31.290458   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 116/120
	I0906 19:14:32.291686   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 117/120
	I0906 19:14:33.292869   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 118/120
	I0906 19:14:34.294275   34075 main.go:141] libmachine: (ha-313128-m02) Waiting for machine to stop 119/120
	I0906 19:14:35.295263   34075 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0906 19:14:35.295320   34075 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0906 19:14:35.297273   34075 out.go:201] 
	W0906 19:14:35.298678   34075 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0906 19:14:35.298692   34075 out.go:270] * 
	* 
	W0906 19:14:35.301576   34075 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 19:14:35.303682   34075 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-313128 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
E0906 19:14:49.184933   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr: exit status 7 (33.521882913s)

                                                
                                                
-- stdout --
	ha-313128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-313128-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-313128-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:14:35.348371   34517 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:14:35.348622   34517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:14:35.348631   34517 out.go:358] Setting ErrFile to fd 2...
	I0906 19:14:35.348635   34517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:14:35.348815   34517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:14:35.348998   34517 out.go:352] Setting JSON to false
	I0906 19:14:35.349022   34517 mustload.go:65] Loading cluster: ha-313128
	I0906 19:14:35.349141   34517 notify.go:220] Checking for updates...
	I0906 19:14:35.349552   34517 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:14:35.349579   34517 status.go:255] checking status of ha-313128 ...
	I0906 19:14:35.350044   34517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:14:35.350097   34517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:14:35.370125   34517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0906 19:14:35.370511   34517 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:14:35.371099   34517 main.go:141] libmachine: Using API Version  1
	I0906 19:14:35.371120   34517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:14:35.371532   34517 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:14:35.371710   34517 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 19:14:35.373432   34517 status.go:330] ha-313128 host status = "Running" (err=<nil>)
	I0906 19:14:35.373446   34517 host.go:66] Checking if "ha-313128" exists ...
	I0906 19:14:35.373723   34517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:14:35.373756   34517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:14:35.388351   34517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0906 19:14:35.388840   34517 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:14:35.389293   34517 main.go:141] libmachine: Using API Version  1
	I0906 19:14:35.389320   34517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:14:35.389651   34517 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:14:35.389813   34517 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:14:35.392228   34517 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:14:35.392561   34517 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:14:35.392593   34517 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:14:35.392752   34517 host.go:66] Checking if "ha-313128" exists ...
	I0906 19:14:35.393184   34517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:14:35.393223   34517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:14:35.407398   34517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43501
	I0906 19:14:35.407795   34517 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:14:35.408246   34517 main.go:141] libmachine: Using API Version  1
	I0906 19:14:35.408269   34517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:14:35.408602   34517 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:14:35.408761   34517 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:14:35.408965   34517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:14:35.409000   34517 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:14:35.411214   34517 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:14:35.411618   34517 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:14:35.411645   34517 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:14:35.411735   34517 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:14:35.411893   34517 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:14:35.412028   34517 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:14:35.412166   34517 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:14:35.493977   34517 ssh_runner.go:195] Run: systemctl --version
	I0906 19:14:35.500212   34517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:14:35.514558   34517 kubeconfig.go:125] found "ha-313128" server: "https://192.168.39.254:8443"
	I0906 19:14:35.514589   34517 api_server.go:166] Checking apiserver status ...
	I0906 19:14:35.514624   34517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:14:35.534274   34517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5941/cgroup
	W0906 19:14:35.543786   34517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5941/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 19:14:35.543850   34517 ssh_runner.go:195] Run: ls
	I0906 19:14:35.548052   34517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 19:14:40.548809   34517 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 19:14:40.548874   34517 retry.go:31] will retry after 284.522402ms: state is "Stopped"
	I0906 19:14:40.834355   34517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 19:14:45.835587   34517 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0906 19:14:45.835642   34517 retry.go:31] will retry after 256.130507ms: state is "Stopped"
	I0906 19:14:46.092024   34517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 19:14:46.953172   34517 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0906 19:14:46.953225   34517 retry.go:31] will retry after 405.239148ms: state is "Stopped"
	I0906 19:14:47.358748   34517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0906 19:14:50.409107   34517 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0906 19:14:50.409153   34517 status.go:422] ha-313128 apiserver status = Running (err=<nil>)
	I0906 19:14:50.409160   34517 status.go:257] ha-313128 status: &{Name:ha-313128 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:14:50.409212   34517 status.go:255] checking status of ha-313128-m02 ...
	I0906 19:14:50.409538   34517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:14:50.409575   34517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:14:50.424321   34517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45957
	I0906 19:14:50.424736   34517 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:14:50.425294   34517 main.go:141] libmachine: Using API Version  1
	I0906 19:14:50.425323   34517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:14:50.425644   34517 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:14:50.425830   34517 main.go:141] libmachine: (ha-313128-m02) Calling .GetState
	I0906 19:14:50.427614   34517 status.go:330] ha-313128-m02 host status = "Running" (err=<nil>)
	I0906 19:14:50.427633   34517 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 19:14:50.427923   34517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:14:50.427955   34517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:14:50.442986   34517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37005
	I0906 19:14:50.443535   34517 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:14:50.443982   34517 main.go:141] libmachine: Using API Version  1
	I0906 19:14:50.444008   34517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:14:50.444339   34517 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:14:50.444511   34517 main.go:141] libmachine: (ha-313128-m02) Calling .GetIP
	I0906 19:14:50.447374   34517 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 19:14:50.447830   34517 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 20:02:28 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 19:14:50.447866   34517 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 19:14:50.447963   34517 host.go:66] Checking if "ha-313128-m02" exists ...
	I0906 19:14:50.448365   34517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:14:50.448413   34517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:14:50.462855   34517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I0906 19:14:50.463234   34517 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:14:50.463682   34517 main.go:141] libmachine: Using API Version  1
	I0906 19:14:50.463699   34517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:14:50.464001   34517 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:14:50.464181   34517 main.go:141] libmachine: (ha-313128-m02) Calling .DriverName
	I0906 19:14:50.464390   34517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:14:50.464406   34517 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHHostname
	I0906 19:14:50.466699   34517 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 19:14:50.467059   34517 main.go:141] libmachine: (ha-313128-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:cf:ee", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 20:02:28 +0000 UTC Type:0 Mac:52:54:00:0d:cf:ee Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:ha-313128-m02 Clientid:01:52:54:00:0d:cf:ee}
	I0906 19:14:50.467088   34517 main.go:141] libmachine: (ha-313128-m02) DBG | domain ha-313128-m02 has defined IP address 192.168.39.32 and MAC address 52:54:00:0d:cf:ee in network mk-ha-313128
	I0906 19:14:50.467200   34517 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHPort
	I0906 19:14:50.467338   34517 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHKeyPath
	I0906 19:14:50.467475   34517 main.go:141] libmachine: (ha-313128-m02) Calling .GetSSHUsername
	I0906 19:14:50.467580   34517 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128-m02/id_rsa Username:docker}
	W0906 19:15:08.809075   34517 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.32:22: connect: no route to host
	W0906 19:15:08.809169   34517 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	E0906 19:15:08.809184   34517 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 19:15:08.809191   34517 status.go:257] ha-313128-m02 status: &{Name:ha-313128-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0906 19:15:08.809219   34517 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.32:22: connect: no route to host
	I0906 19:15:08.809227   34517 status.go:255] checking status of ha-313128-m04 ...
	I0906 19:15:08.809520   34517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:15:08.809560   34517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:15:08.824243   34517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0906 19:15:08.824697   34517 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:15:08.825166   34517 main.go:141] libmachine: Using API Version  1
	I0906 19:15:08.825186   34517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:15:08.825520   34517 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:15:08.825684   34517 main.go:141] libmachine: (ha-313128-m04) Calling .GetState
	I0906 19:15:08.827044   34517 status.go:330] ha-313128-m04 host status = "Stopped" (err=<nil>)
	I0906 19:15:08.827057   34517 status.go:343] host is not running, skipping remaining checks
	I0906 19:15:08.827062   34517 status.go:257] ha-313128-m04 status: &{Name:ha-313128-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
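
The repeated "no route to host" errors in the stderr above come from the TCP dial that the status check attempts against the m02 SSH endpoint (192.168.39.32:22) while that VM is powered off. Below is a minimal, hypothetical Go probe, not part of the test suite, that reproduces the same dial failure under those conditions:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.39.32:22 is the SSH endpoint of ha-313128-m02, taken from the log above.
		// With the VM stopped, the dial typically fails with "connect: no route to host",
		// the same error surfaced by the storage-capacity and node-status checks.
		conn, err := net.DialTimeout("tcp", "192.168.39.32:22", 5*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable")
	}
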
ha_test.go:546: status says there are running hosts: args "out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr": ha-313128
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-313128-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-313128-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr": ha-313128
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-313128-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-313128-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr": ha-313128
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-313128-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-313128-m04
type: Worker
host: Stopped
kubelet: Stopped
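
All three assertions above (ha_test.go:546, :549, :552) parse the same "minikube status" output. Below is a minimal sketch, not part of ha_test.go, that re-runs the quoted command and tallies how many nodes report a running host, a stopped kubelet, and a stopped apiserver, which is essentially what the test verifies after "stop":

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Binary path, profile name, and flags are taken verbatim from the failure
		// output above. "minikube status" exits non-zero when nodes are down (the
		// post-mortem below notes "exit status 2 (may be ok)"), so the error is
		// deliberately ignored and only the combined output is inspected.
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-313128",
			"status", "-v=7", "--alsologtostderr").CombinedOutput()
		s := string(out)
		fmt.Printf("hosts running:      %d\n", strings.Count(s, "host: Running"))
		fmt.Printf("kubelets stopped:   %d\n", strings.Count(s, "kubelet: Stopped"))
		fmt.Printf("apiservers stopped: %d\n", strings.Count(s, "apiserver: Stopped"))
	}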

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-313128 -n ha-313128
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-313128 -n ha-313128: exit status 2 (15.60117979s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-313128 logs -n 25: (1.455154855s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-313128 ssh -n ha-313128-m02 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04:/home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m04 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp testdata/cp-test.txt                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2237225197/001/cp-test_ha-313128-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128:/home/docker/cp-test_ha-313128-m04_ha-313128.txt                       |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128 sudo cat                                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128.txt                                 |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m02:/home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m02 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m03:/home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n                                                                 | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | ha-313128-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-313128 ssh -n ha-313128-m03 sudo cat                                          | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC | 06 Sep 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-313128 node stop m02 -v=7                                                     | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-313128 node start m02 -v=7                                                    | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-313128 -v=7                                                           | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-313128 -v=7                                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-313128 --wait=true -v=7                                                    | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 19:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-313128                                                                | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 19:12 UTC |                     |
	| node    | ha-313128 node delete m03 -v=7                                                   | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 19:12 UTC | 06 Sep 24 19:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-313128 stop -v=7                                                              | ha-313128 | jenkins | v1.34.0 | 06 Sep 24 19:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 19:00:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:00:42.604662   30973 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:00:42.604922   30973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:42.604931   30973 out.go:358] Setting ErrFile to fd 2...
	I0906 19:00:42.604937   30973 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:42.605118   30973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:00:42.605712   30973 out.go:352] Setting JSON to false
	I0906 19:00:42.606606   30973 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2592,"bootTime":1725646651,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:00:42.606669   30973 start.go:139] virtualization: kvm guest
	I0906 19:00:42.609026   30973 out.go:177] * [ha-313128] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:00:42.610315   30973 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:00:42.610320   30973 notify.go:220] Checking for updates...
	I0906 19:00:42.612626   30973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:00:42.614046   30973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:00:42.615697   30973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:00:42.617289   30973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:00:42.618880   30973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:00:42.620642   30973 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:00:42.620737   30973 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:00:42.621181   30973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:00:42.621247   30973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:42.636849   30973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0906 19:00:42.637263   30973 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:42.637848   30973 main.go:141] libmachine: Using API Version  1
	I0906 19:00:42.637868   30973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:42.638214   30973 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:42.638435   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.676963   30973 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 19:00:42.678406   30973 start.go:297] selected driver: kvm2
	I0906 19:00:42.678423   30973 start.go:901] validating driver "kvm2" against &{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default A
PIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headl
amp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:00:42.678622   30973 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:00:42.678996   30973 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:00:42.679070   30973 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:00:42.694855   30973 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:00:42.695667   30973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:00:42.695733   30973 cni.go:84] Creating CNI manager for ""
	I0906 19:00:42.695746   30973 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 19:00:42.695799   30973 start.go:340] cluster config:
	{Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:00:42.695915   30973 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:00:42.698090   30973 out.go:177] * Starting "ha-313128" primary control-plane node in "ha-313128" cluster
	I0906 19:00:42.699706   30973 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:00:42.699746   30973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:00:42.699754   30973 cache.go:56] Caching tarball of preloaded images
	I0906 19:00:42.699837   30973 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:00:42.699848   30973 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 19:00:42.699961   30973 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/config.json ...
	I0906 19:00:42.700160   30973 start.go:360] acquireMachinesLock for ha-313128: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:00:42.700217   30973 start.go:364] duration metric: took 31.95µs to acquireMachinesLock for "ha-313128"
	I0906 19:00:42.700243   30973 start.go:96] Skipping create...Using existing machine configuration
	I0906 19:00:42.700253   30973 fix.go:54] fixHost starting: 
	I0906 19:00:42.700615   30973 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:00:42.700669   30973 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:00:42.715246   30973 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0906 19:00:42.715721   30973 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:00:42.716296   30973 main.go:141] libmachine: Using API Version  1
	I0906 19:00:42.716319   30973 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:00:42.716656   30973 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:00:42.716872   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.717048   30973 main.go:141] libmachine: (ha-313128) Calling .GetState
	I0906 19:00:42.718801   30973 fix.go:112] recreateIfNeeded on ha-313128: state=Running err=<nil>
	W0906 19:00:42.718818   30973 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 19:00:42.722091   30973 out.go:177] * Updating the running kvm2 "ha-313128" VM ...
	I0906 19:00:42.723320   30973 machine.go:93] provisionDockerMachine start ...
	I0906 19:00:42.723341   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:00:42.723593   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.726581   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.727062   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.727086   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.727274   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.727450   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.727600   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.727717   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.727841   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.728035   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.728049   30973 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 19:00:42.842622   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128
	
	I0906 19:00:42.842652   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:42.842912   30973 buildroot.go:166] provisioning hostname "ha-313128"
	I0906 19:00:42.842943   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:42.843128   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.845900   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.846338   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.846367   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.846533   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.846705   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.846862   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.846998   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.847138   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.847339   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.847355   30973 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-313128 && echo "ha-313128" | sudo tee /etc/hostname
	I0906 19:00:42.971699   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-313128
	
	I0906 19:00:42.971726   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:42.974199   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.974577   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:42.974616   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:42.974777   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:42.974955   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.975110   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:42.975250   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:42.975389   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:42.975547   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:42.975561   30973 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-313128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-313128/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-313128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:00:43.086298   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:00:43.086336   30973 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:00:43.086383   30973 buildroot.go:174] setting up certificates
	I0906 19:00:43.086397   30973 provision.go:84] configureAuth start
	I0906 19:00:43.086411   30973 main.go:141] libmachine: (ha-313128) Calling .GetMachineName
	I0906 19:00:43.086768   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:00:43.089761   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.090172   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.090221   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.090371   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.092707   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.093131   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.093150   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.093281   30973 provision.go:143] copyHostCerts
	I0906 19:00:43.093308   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:00:43.093346   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:00:43.093371   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:00:43.093449   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:00:43.093549   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:00:43.093574   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:00:43.093581   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:00:43.093618   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:00:43.093687   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:00:43.093709   30973 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:00:43.093714   30973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:00:43.093750   30973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:00:43.093833   30973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.ha-313128 san=[127.0.0.1 192.168.39.70 ha-313128 localhost minikube]
	I0906 19:00:43.258285   30973 provision.go:177] copyRemoteCerts
	I0906 19:00:43.258366   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:00:43.258394   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.260947   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.261383   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.261412   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.261600   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:43.261791   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.261926   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:43.262075   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:00:43.348224   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 19:00:43.348285   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:00:43.374716   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 19:00:43.374792   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0906 19:00:43.403028   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 19:00:43.403095   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 19:00:43.428263   30973 provision.go:87] duration metric: took 341.855389ms to configureAuth
	I0906 19:00:43.428293   30973 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:00:43.428524   30973 config.go:182] Loaded profile config "ha-313128": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:00:43.428598   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:00:43.431629   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.432063   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:00:43.432090   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:00:43.432269   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:00:43.432477   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.432645   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:00:43.432802   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:00:43.432969   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:00:43.433127   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:00:43.433144   30973 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:02:14.266261   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:02:14.266292   30973 machine.go:96] duration metric: took 1m31.542957549s to provisionDockerMachine
	I0906 19:02:14.266304   30973 start.go:293] postStartSetup for "ha-313128" (driver="kvm2")
	I0906 19:02:14.266315   30973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:02:14.266329   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.266669   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:02:14.266694   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.270021   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.270486   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.270511   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.270640   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.270873   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.271053   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.271182   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.357410   30973 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:02:14.362343   30973 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:02:14.362367   30973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:02:14.362428   30973 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:02:14.362506   30973 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:02:14.362518   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 19:02:14.362611   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:02:14.372770   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:02:14.400357   30973 start.go:296] duration metric: took 134.040576ms for postStartSetup
	I0906 19:02:14.400419   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.400730   30973 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0906 19:02:14.400755   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.403411   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.403817   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.403842   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.403988   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.404164   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.404325   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.404472   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	W0906 19:02:14.487375   30973 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0906 19:02:14.487427   30973 fix.go:56] duration metric: took 1m31.787174067s for fixHost
	I0906 19:02:14.487448   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.490126   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.490510   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.490541   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.490726   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.490930   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.491084   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.491223   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.491366   30973 main.go:141] libmachine: Using SSH client type: native
	I0906 19:02:14.491537   30973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I0906 19:02:14.491547   30973 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:02:14.598045   30973 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725649334.553360444
	
	I0906 19:02:14.598070   30973 fix.go:216] guest clock: 1725649334.553360444
	I0906 19:02:14.598077   30973 fix.go:229] Guest: 2024-09-06 19:02:14.553360444 +0000 UTC Remote: 2024-09-06 19:02:14.487433708 +0000 UTC m=+91.917728709 (delta=65.926736ms)
	I0906 19:02:14.598105   30973 fix.go:200] guest clock delta is within tolerance: 65.926736ms
	I0906 19:02:14.598121   30973 start.go:83] releasing machines lock for "ha-313128", held for 1m31.897881945s
	I0906 19:02:14.598147   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.598410   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:02:14.600993   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.601335   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.601359   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.601535   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602064   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602246   30973 main.go:141] libmachine: (ha-313128) Calling .DriverName
	I0906 19:02:14.602360   30973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:02:14.602395   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.602490   30973 ssh_runner.go:195] Run: cat /version.json
	I0906 19:02:14.602505   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHHostname
	I0906 19:02:14.605042   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605172   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605395   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.605418   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605547   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.605652   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:14.605677   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:14.605689   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.605801   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHPort
	I0906 19:02:14.605856   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.605923   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHKeyPath
	I0906 19:02:14.606008   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.606047   30973 main.go:141] libmachine: (ha-313128) Calling .GetSSHUsername
	I0906 19:02:14.606191   30973 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/ha-313128/id_rsa Username:docker}
	I0906 19:02:14.682320   30973 ssh_runner.go:195] Run: systemctl --version
	I0906 19:02:14.707871   30973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:02:14.868709   30973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 19:02:14.878107   30973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:02:14.878182   30973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:02:14.887795   30973 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
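The step above sidelines any bridge/podman CNI configs under /etc/cni/net.d so they cannot conflict with the runtime's own networking. As a minimal Go sketch of the same rename-to-.mk_disabled idea (glob patterns and suffix taken from the find command in the log, everything else an illustrative assumption):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Rename any bridge/podman CNI config files by appending .mk_disabled,
// mirroring the find/mv command in the log above. Error handling is minimal.
func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}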
	I0906 19:02:14.887825   30973 start.go:495] detecting cgroup driver to use...
	I0906 19:02:14.887900   30973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:02:14.905023   30973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:02:14.920380   30973 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:02:14.920478   30973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:02:14.936661   30973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:02:14.951264   30973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:02:15.102677   30973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:02:15.248271   30973 docker.go:233] disabling docker service ...
	I0906 19:02:15.248331   30973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:02:15.264423   30973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:02:15.278696   30973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:02:15.426846   30973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:02:15.574956   30973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
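The sequence above stops, disables and masks the Docker units before the node switches fully to CRI-O. A rough Go sketch of issuing those same systemctl commands is below; it only replays the commands shown in the log and does none of minikube's real error handling.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a single privileged command and reports failures without aborting.
func run(args ...string) {
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		fmt.Printf("%v: %v (%s)\n", args, err, out)
	}
}

func main() {
	run("systemctl", "stop", "-f", "docker.socket")
	run("systemctl", "stop", "-f", "docker.service")
	run("systemctl", "disable", "docker.socket")
	run("systemctl", "mask", "docker.service")
	run("systemctl", "is-active", "--quiet", "service", "docker")
}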
	I0906 19:02:15.589843   30973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:02:15.609432   30973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 19:02:15.609504   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.620399   30973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:02:15.620463   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.630897   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.641484   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.651945   30973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:02:15.663429   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.674521   30973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.689183   30973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:02:15.700177   30973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:02:15.710433   30973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:02:15.720027   30973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:02:15.864474   30973 ssh_runner.go:195] Run: sudo systemctl restart crio
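The sed edits above point CRI-O at the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager before the daemon is restarted. The following Go sketch performs an equivalent in-place rewrite of a local copy of 02-crio.conf; the path and values are copied from the log, the rest is illustrative rather than minikube's actual code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Rewrite the pause_image and cgroup_manager lines of a CRI-O drop-in config,
// mirroring the sed commands in the log above.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // assumed to be locally readable/writable
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}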
	I0906 19:02:16.100883   30973 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:02:16.100949   30973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
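After restarting CRI-O, the tool polls for the runtime socket (and then for a working crictl) with a 60-second budget. A minimal Go sketch of such a wait loop, assuming a 500 ms poll interval:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the deadline passes,
// illustrating the "Will wait 60s for socket path" step above.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is present")
}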
	I0906 19:02:16.106267   30973 start.go:563] Will wait 60s for crictl version
	I0906 19:02:16.106339   30973 ssh_runner.go:195] Run: which crictl
	I0906 19:02:16.110880   30973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:02:16.149993   30973 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 19:02:16.150090   30973 ssh_runner.go:195] Run: crio --version
	I0906 19:02:16.181738   30973 ssh_runner.go:195] Run: crio --version
	I0906 19:02:16.215139   30973 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 19:02:16.216581   30973 main.go:141] libmachine: (ha-313128) Calling .GetIP
	I0906 19:02:16.219061   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:16.219402   30973 main.go:141] libmachine: (ha-313128) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:5d:d2", ip: ""} in network mk-ha-313128: {Iface:virbr1 ExpiryTime:2024-09-06 19:50:56 +0000 UTC Type:0 Mac:52:54:00:e1:5d:d2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-313128 Clientid:01:52:54:00:e1:5d:d2}
	I0906 19:02:16.219431   30973 main.go:141] libmachine: (ha-313128) DBG | domain ha-313128 has defined IP address 192.168.39.70 and MAC address 52:54:00:e1:5d:d2 in network mk-ha-313128
	I0906 19:02:16.219550   30973 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 19:02:16.224692   30973 kubeadm.go:883] updating cluster {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:1
92.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false h
elm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:02:16.224825   30973 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:02:16.224887   30973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:02:16.279712   30973 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:02:16.279734   30973 crio.go:433] Images already preloaded, skipping extraction
	I0906 19:02:16.279784   30973 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:02:16.314787   30973 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:02:16.314818   30973 cache_images.go:84] Images are preloaded, skipping loading
	I0906 19:02:16.314830   30973 kubeadm.go:934] updating node { 192.168.39.70 8443 v1.31.0 crio true true} ...
	I0906 19:02:16.314943   30973 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-313128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:02:16.315021   30973 ssh_runner.go:195] Run: crio config
	I0906 19:02:16.364038   30973 cni.go:84] Creating CNI manager for ""
	I0906 19:02:16.364072   30973 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0906 19:02:16.364092   30973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:02:16.364128   30973 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-313128 NodeName:ha-313128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:02:16.364353   30973 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-313128"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:02:16.364385   30973 kube-vip.go:115] generating kube-vip config ...
	I0906 19:02:16.364438   30973 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0906 19:02:16.376810   30973 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0906 19:02:16.376947   30973 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0906 19:02:16.377010   30973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 19:02:16.386554   30973 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:02:16.386654   30973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0906 19:02:16.396282   30973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0906 19:02:16.413426   30973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:02:16.430809   30973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0906 19:02:16.447378   30973 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0906 19:02:16.464060   30973 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0906 19:02:16.469045   30973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:02:16.610775   30973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:02:16.625535   30973 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128 for IP: 192.168.39.70
	I0906 19:02:16.625562   30973 certs.go:194] generating shared ca certs ...
	I0906 19:02:16.625577   30973 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.625717   30973 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:02:16.625753   30973 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:02:16.625762   30973 certs.go:256] generating profile certs ...
	I0906 19:02:16.625841   30973 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/client.key
	I0906 19:02:16.625866   30973 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c
	I0906 19:02:16.625879   30973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.70 192.168.39.32 192.168.39.172 192.168.39.254]
	I0906 19:02:16.804798   30973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c ...
	I0906 19:02:16.804827   30973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c: {Name:mkbad82bfe626c7b530e91f2fb1afe292d0ae161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.805001   30973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c ...
	I0906 19:02:16.805015   30973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c: {Name:mk0ae7f160e2379f6800fc471c87e5a6b8b93da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:02:16.805088   30973 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt.5e9eb73c -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt
	I0906 19:02:16.805220   30973 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key.5e9eb73c -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key
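The lines above mint an API-server certificate whose SANs cover the service IP, localhost, the three control-plane node IPs and the HA VIP 192.168.39.254. As a rough illustration (not minikube's actual certs.go code), the Go sketch below issues a certificate with those IP SANs against a throwaway self-signed CA; key sizes, subject names and lifetimes are assumptions of the sketch.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.70"), net.ParseIP("192.168.39.32"),
			net.ParseIP("192.168.39.172"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}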
	I0906 19:02:16.805349   30973 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key
	I0906 19:02:16.805363   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 19:02:16.805378   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 19:02:16.805391   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 19:02:16.805424   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 19:02:16.805440   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 19:02:16.805451   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 19:02:16.805460   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 19:02:16.805469   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 19:02:16.805512   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:02:16.805541   30973 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:02:16.805551   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:02:16.805578   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:02:16.805605   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:02:16.805628   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:02:16.805663   30973 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:02:16.805690   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 19:02:16.805703   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:16.805716   30973 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 19:02:16.806296   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:02:16.832409   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:02:16.856617   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:02:16.883121   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:02:16.908841   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 19:02:16.934050   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 19:02:16.957637   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:02:16.982352   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/ha-313128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 19:02:17.007984   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:02:17.034211   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:02:17.058444   30973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:02:17.082266   30973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:02:17.099732   30973 ssh_runner.go:195] Run: openssl version
	I0906 19:02:17.105835   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:02:17.117417   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.122102   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.122167   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:02:17.127926   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:02:17.137341   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:02:17.147895   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.152327   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.152384   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:02:17.158147   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 19:02:17.167715   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:02:17.179028   30973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.183445   30973 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.183521   30973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:02:17.189253   30973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
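Each CA bundle above is published twice: under a readable name and under its OpenSSL subject-hash name via "test -L ... || ln -fs ...". A small Go sketch of that idempotent-symlink idiom follows; the hash filename is copied from the log, and computing the hash itself is out of scope here.

package main

import (
	"fmt"
	"os"
)

// ensureSymlink mirrors the shell idiom `test -L LINK || ln -fs TARGET LINK`.
func ensureSymlink(target, link string) error {
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil // already a symlink, nothing to do
	}
	_ = os.Remove(link) // ln -f semantics: replace whatever is there
	return os.Symlink(target, link)
}

func main() {
	if err := ensureSymlink("/etc/ssl/certs/13178.pem", "/etc/ssl/certs/51391683.0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}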
	I0906 19:02:17.198545   30973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:02:17.203152   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 19:02:17.208885   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 19:02:17.214536   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 19:02:17.220261   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 19:02:17.226142   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 19:02:17.231663   30973 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
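The "openssl x509 -checkend 86400" calls above verify that none of the control-plane certificates expire within the next 24 hours. A rough Go equivalent using crypto/x509, assuming local access to one of the files listed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Report whether a PEM certificate expires within the next 24 hours,
// approximating `openssl x509 -checkend 86400`.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 24h")
}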
	I0906 19:02:17.237142   30973 kubeadm.go:392] StartCluster: {Name:ha-313128 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-313128 Namespace:default APIServerHAVIP:192.
168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm
-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:02:17.237264   30973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:02:17.237316   30973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:02:17.274034   30973 cri.go:89] found id: "9103596edb635c85d04deccce75e13f1cd3262538a222b30a0c94e764770d28c"
	I0906 19:02:17.274063   30973 cri.go:89] found id: "15aafcfc8e779931ee6d9a42dd1aab5a06c3de9f67ec6b3feb49305eed4103e0"
	I0906 19:02:17.274069   30973 cri.go:89] found id: "8fa4e79af67df589d61af4ab106d80e16d119e6feed8deff5827505fa804474c"
	I0906 19:02:17.274074   30973 cri.go:89] found id: "5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939"
	I0906 19:02:17.274078   30973 cri.go:89] found id: "ffd27ffbc9742588787d06e0f28f46a237db037f1befc44f79f6dda70439ad8d"
	I0906 19:02:17.274083   30973 cri.go:89] found id: "76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa"
	I0906 19:02:17.274087   30973 cri.go:89] found id: "76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b"
	I0906 19:02:17.274091   30973 cri.go:89] found id: "135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1"
	I0906 19:02:17.274095   30973 cri.go:89] found id: "13b08e833a9ce43e2a9e93f9e4d6d29e8fd2995b6f9220c0d6d7380ecd6edf9d"
	I0906 19:02:17.274104   30973 cri.go:89] found id: "7f7c5c81b9e0552eeef3ac141c4328cb0d01d3a5aca9e22618604d55f00dbd0f"
	I0906 19:02:17.274108   30973 cri.go:89] found id: "9a30d709b3b927a606d6b3902c4da0e1dcf9c09280294061d1d4f58b15d2a387"
	I0906 19:02:17.274112   30973 cri.go:89] found id: "e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8"
	I0906 19:02:17.274116   30973 cri.go:89] found id: "a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f"
	I0906 19:02:17.274121   30973 cri.go:89] found id: ""
	I0906 19:02:17.274164   30973 ssh_runner.go:195] Run: sudo runc list -f json
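The listing step above collects the IDs of all kube-system containers so they can be inspected later in the restart flow. A minimal Go sketch that shells out to crictl with the same flags and gathers the non-empty lines of output (the sudo invocation is simplified relative to the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// List kube-system container IDs, one per line, as crictl prints them with --quiet.
func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d container IDs\n", len(ids))
}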
	
	
	==> CRI-O <==
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.872946066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725650124872909754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2683c58-a8f6-4e5e-8dbe-6d7ec55293fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.873770959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b437f29a-3762-4c6d-a4e2-4b630c0293ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.873873075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b437f29a-3762-4c6d-a4e2-4b630c0293ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.875222452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78aafa2222cb34f7484f1189f1e14efe6a66294464a77ccd135d665024e833ea,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725650067766077933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9978ee1521fe9ce02efd2499ae4da45efc68645f43bdb4b7580b68aa6c2638,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725650014479257115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117b
7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85e
be5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pr
otocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-
bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af
3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b437f29a-3762-4c6d-a4e2-4b630c0293ad name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.889373855Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c280e0f5-73d2-48b2-bcf8-18636b44cc10 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.890034293Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-s2cgz,Uid:ea1b3998-c924-47a2-a321-bd8f20ed324e,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649376621609740,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:54:05.088814193Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-313128,Uid:f6d46474fdf3e5977e60eb17ada4e349,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1725649356168399514,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{kubernetes.io/config.hash: f6d46474fdf3e5977e60eb17ada4e349,kubernetes.io/config.seen: 2024-09-06T19:02:16.420886713Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-gccvh,Uid:9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649343007117605,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-06T18:51:43.928140026Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&PodSandboxMetadata{Name:etcd-ha-313128,Uid:9cddf482287bf3b2dbb1236f43dc96c3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342953129237,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.70:2379,kubernetes.io/config.hash: 9cddf482287bf3b2dbb1236f43dc96c3,kubernetes.io/config.seen: 2024-09-06T18:51:25.375047261Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-313128,Uid:5971d16b859a22cc0a378921d7577d4a,Namespace
:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342939628598,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5971d16b859a22cc0a378921d7577d4a,kubernetes.io/config.seen: 2024-09-06T18:51:25.375053288Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&PodSandboxMetadata{Name:kindnet-h2trt,Uid:90af3550-1fae-46bd-9329-f185fcdb23c6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342931311678,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fc
db23c6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:29.831601797Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-313128,Uid:1f52c5565007a9e3852323973b3197bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342880954362,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1f52c5565007a9e3852323973b3197bc,kubernetes.io/config.seen: 2024-09-06T18:51:25.375052130Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d59
60,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6c957eac-7904-4c39-b858-bfb7da32c75c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342875853842,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/t
mp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-06T18:51:43.943423552Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-313128,Uid:19f5824a415bb48f2bb6ab3144efbec6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342869057108,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.70:8443,kubernetes.io/config.hash: 19f5824a415bb48f2bb6ab3144efbec6,kubernetes.io/config.seen: 2024-09-06T1
8:51:25.375050957Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&PodSandboxMetadata{Name:kube-proxy-h5xn7,Uid:e45358c5-398e-4d33-9bd0-a4f28ce17ac9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342854024488,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:29.825007552Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-gk28z,Uid:ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649337900654122,Lab
els:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:43.938411060Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-s2cgz,Uid:ea1b3998-c924-47a2-a321-bd8f20ed324e,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648845415388093,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:54:05.088814193Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-gk28z,Uid:ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648704255562292,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:43.938411060Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-gccvh,Uid:9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648704235684853,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:43.928140026Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&PodSandboxMetadata{Name:kube-proxy-h5xn7,Uid:e45358c5-398e-4d33-9bd0-a4f28ce17ac9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648690148016126,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:29.825007552Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&Po
dSandbox{Id:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&PodSandboxMetadata{Name:kindnet-h2trt,Uid:90af3550-1fae-46bd-9329-f185fcdb23c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648690143296092,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T18:51:29.831601797Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-313128,Uid:5971d16b859a22cc0a378921d7577d4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648678770402611,Labels:map[string]string{component: kube-scheduler,io.kubernet
es.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5971d16b859a22cc0a378921d7577d4a,kubernetes.io/config.seen: 2024-09-06T18:51:18.311933771Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&PodSandboxMetadata{Name:etcd-ha-313128,Uid:9cddf482287bf3b2dbb1236f43dc96c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725648678755469606,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.70:2379,kubernetes.io/config.hash: 9cddf482287
bf3b2dbb1236f43dc96c3,kubernetes.io/config.seen: 2024-09-06T18:51:18.311927690Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c280e0f5-73d2-48b2-bcf8-18636b44cc10 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.891251471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e4dcb02-ca99-4d40-aaea-040f0e673b12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.891341661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e4dcb02-ca99-4d40-aaea-040f0e673b12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.891985644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78aafa2222cb34f7484f1189f1e14efe6a66294464a77ccd135d665024e833ea,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725650067766077933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9978ee1521fe9ce02efd2499ae4da45efc68645f43bdb4b7580b68aa6c2638,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725650014479257115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117b
7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85e
be5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pr
otocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-
bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af
3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e4dcb02-ca99-4d40-aaea-040f0e673b12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.893301482Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},},}" file="otel-collector/interceptors.go:62" id=1abbb022-894b-4177-8165-e994dc5827dd name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.893443454Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-313128,Uid:19f5824a415bb48f2bb6ab3144efbec6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342869057108,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.70:8443,kubernetes.io/config.hash: 19f5824a415bb48f2bb6ab3144efbec6,kubernetes.io/config.seen: 2024-09-06T18:51:25.375050957Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1abbb022-894b-4177-8
165-e994dc5827dd name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.894115085Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5afa1e6e-b7d0-41cd-9ef9-291aa792be6b name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.894255852Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-313128,Uid:19f5824a415bb48f2bb6ab3144efbec6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725649342869057108,Network:&PodSandboxNetworkStatus{Ip:,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.70:8443,kubernetes.io/config.hash: 19f5824a41
5bb48f2bb6ab3144efbec6,kubernetes.io/config.seen: 2024-09-06T18:51:25.375050957Z,kubernetes.io/config.source: file,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=5afa1e6e-b7d0-41cd-9ef9-291aa792be6b name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.894959667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},},}" file="otel-collector/interceptors.go:62" id=4766b594-b9c4-4d7f-bcb0-59abbd78a4f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.895065675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4766b594-b9c4-4d7f-bcb0-59abbd78a4f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.895166814Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78aafa2222cb34f7484f1189f1e14efe6a66294464a77ccd135d665024e833ea,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725650067766077933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4766b594-b9c4-4d7f-bcb0-59abbd78a4f1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.895723145Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:78aafa2222cb34f7484f1189f1e14efe6a66294464a77ccd135d665024e833ea,Verbose:false,}" file="otel-collector/interceptors.go:62" id=c79a43c3-fff1-4468-a0c5-5a27b34d5c9a name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.895889015Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:78aafa2222cb34f7484f1189f1e14efe6a66294464a77ccd135d665024e833ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},State:CONTAINER_EXITED,CreatedAt:1725650067811799445,StartedAt:1725650067868646781,FinishedAt:1725650123366825743,ExitCode:255,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.31.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Reason:Error,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/19f5824a415bb48f2bb6ab3144efbec6/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/19f5824a415bb48f2bb6ab3144efbec6/containers/kube-apiserver/ccb850ad,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Moun
t{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-ha-313128_19f5824a415bb48f2bb6ab3144efbec6/kube-apiserver/4.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c79a43c3-fff1-4468-a0c5-5a27b34d5c9a name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.933765457Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=de3992cb-bf14-4d61-8f5f-54c86bd84092 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.933877111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=de3992cb-bf14-4d61-8f5f-54c86bd84092 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.935863893Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db17c576-11cd-43af-857d-d606d0d0854a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.936442572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725650124936414581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db17c576-11cd-43af-857d-d606d0d0854a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.937361602Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abe36bc5-2536-4744-844a-91e3f567fe04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.937464219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abe36bc5-2536-4744-844a-91e3f567fe04 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:15:24 ha-313128 crio[3609]: time="2024-09-06 19:15:24.938238399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:78aafa2222cb34f7484f1189f1e14efe6a66294464a77ccd135d665024e833ea,PodSandboxId:3ae5e99906a2e7d60b2f9b8c473b4fe3c1a7c17f646e8526c417f5ed64d78285,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725650067766077933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5824a415bb48f2bb6ab3144efbec6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9978ee1521fe9ce02efd2499ae4da45efc68645f43bdb4b7580b68aa6c2638,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725650014479257115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563331b1df56b8b5795b2c9175f1a62d59b65793e791e6a96e6b69f98e5b5688,PodSandboxId:a0a256d64c27fe553debc9cee7795d2165efa2f137c4db5072854172322d5960,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725649412490122211,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c957eac-7904-4c39-b858-bfb7da32c75c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725649381499760921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3f5b10c63caf9a31dd10d5ffe3bba45881f14483e9183b8849e03d3b4ffbf3,PodSandboxId:7cbf701e90a6fbb3a9fd67873d4e5eda16366d8c9e18d7e8d518b5717ebd683e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725649376779585755,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a170bb1c8a3cbe782ab565e77d0d165ee507e63ed9117697c30ea2e8ea804124,PodSandboxId:9a8c2a564ace31012c052944782605e249bec8d4ad6b26e6f8f1b633cdc04f51,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1725649356266036050,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6d46474fdf3e5977e60eb17ada4e349,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0,PodSandboxId:419150e9a53e3c37c3ac0fc401ba5cdf998dbcb1ecba7c97bc45a2f09f226bff,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725649343663285617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9,PodSandboxId:64b8d66092688a7a7fe54ddfba6ef12e68ce610fff1d8088f626ef8136af54b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725649343587012783,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{
Id:36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a,PodSandboxId:54824bb3087ee24f363f6af33a4c19b57a3880bc25d71eb04c2d3c9d98bb510f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725649343542753406,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fab375b2e00c6c1c477e49d20575c282cf15631db08117b
7cbd6669002057a7,PodSandboxId:4c7e7fc7137a01868b6032966c97e5ab0993219f603e9954d84e39d0c5fd2377,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725649343511188051,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f52c5565007a9e3852323973b3197bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85e
be5629edada2adae88766,PodSandboxId:d481cfc1806b6272b538bf223421e03ee8190a6608ae80756ce1ab3ab6f509d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649343349708836,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b,PodSandboxId:e453276f34782cdb061fd154f3df9d3e0c690deb9f81a215bf9317fbbea70652,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725649343215759160,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c,PodSandboxId:7356e11979968d7ab6d8b00ef92811649e7bb9bd22843ca81cdf88b5275b3f28,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725649338049395022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pr
otocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3f2cd2f6c9c824c0810921ac77bc37ac93f8c3dbfa044debf7e0c16d409178,PodSandboxId:74b84ec8f17a736622a48f128b74fd25fb83d4642d140d87a9aedc0f7002a79a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725648847674944075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-s2cgz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea1b3998-c924-47a2-a321-
bd8f20ed324e,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939,PodSandboxId:9151daea570f33f4d8431540305a5987599db668e67f38f3ba7ae9f655cb2711,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704565923271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gk28z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab595ef6-eaa8-44a0-bdad-ddd59c8d052d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa,PodSandboxId:8449d8c8bfa3ec5a36d30a23e123c54964eb50d5597ac23c252d858bca086c0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725648704439976509,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gccvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b7c0e1a-3359-4f9f-826c-b75cbdfcd500,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b,PodSandboxId:a3128d8e090be9c2fe087989f856b99f1cdacde879762261bec27fd4a050b9fe,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Ann
otations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725648692553283606,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h2trt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90af3550-1fae-46bd-9329-f185fcdb23c6,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1,PodSandboxId:dde7791c0770ac5cc0d9e200e1f09cbef2a9b9546421ffcc012ad33aed852d62,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpe
cifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725648690396337408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5xn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e45358c5-398e-4d33-9bd0-a4f28ce17ac9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f,PodSandboxId:aeb85ed29ab1dd204129aca92a0d23cb5dc439d970aed943f6f98c9cf74768c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725648678969423150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5971d16b859a22cc0a378921d7577d4a,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8,PodSandboxId:0ced27e2ded46491c60036d1ea06a36e89ff0bad469078986f6d94e87a4ae9af,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af
3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725648678980096124,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-313128,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cddf482287bf3b2dbb1236f43dc96c3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abe36bc5-2536-4744-844a-91e3f567fe04 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	78aafa2222cb3       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      57 seconds ago       Exited              kube-apiserver            4                   3ae5e99906a2e       kube-apiserver-ha-313128
	ce9978ee1521f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       5                   a0a256d64c27f       storage-provisioner
	563331b1df56b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago       Exited              storage-provisioner       4                   a0a256d64c27f       storage-provisioner
	1cfd32c774caf       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      12 minutes ago       Running             kube-controller-manager   2                   4c7e7fc7137a0       kube-controller-manager-ha-313128
	9d3f5b10c63ca       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      12 minutes ago       Running             busybox                   1                   7cbf701e90a6f       busybox-7dff88458-s2cgz
	a170bb1c8a3cb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      12 minutes ago       Running             kube-vip                  0                   9a8c2a564ace3       kube-vip-ha-313128
	d3e14bee704aa       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Running             kindnet-cni               1                   419150e9a53e3       kindnet-h2trt
	bea01e33385d8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago       Running             kube-scheduler            1                   64b8d66092688       kube-scheduler-ha-313128
	36d954de08dab       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Running             etcd                      1                   54824bb3087ee       etcd-ha-313128
	7fab375b2e00c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago       Exited              kube-controller-manager   1                   4c7e7fc7137a0       kube-controller-manager-ha-313128
	25ee04d39c4c9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Running             coredns                   1                   d481cfc1806b6       coredns-6f6b679f8f-gccvh
	77c80de1adc0a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Running             kube-proxy                1                   e453276f34782       kube-proxy-h5xn7
	f78069cd2a935       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Running             coredns                   1                   7356e11979968       coredns-6f6b679f8f-gk28z
	7b3f2cd2f6c9c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   21 minutes ago       Exited              busybox                   0                   74b84ec8f17a7       busybox-7dff88458-s2cgz
	5b950806bc4b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      23 minutes ago       Exited              coredns                   0                   9151daea570f3       coredns-6f6b679f8f-gk28z
	76bbd732b8695       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      23 minutes ago       Exited              coredns                   0                   8449d8c8bfa3e       coredns-6f6b679f8f-gccvh
	76ca94f153009       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    23 minutes ago       Exited              kindnet-cni               0                   a3128d8e090be       kindnet-h2trt
	135074e446370       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      23 minutes ago       Exited              kube-proxy                0                   dde7791c0770a       kube-proxy-h5xn7
	e32b22b9f83ac       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      24 minutes ago       Exited              etcd                      0                   0ced27e2ded46       etcd-ha-313128
	a406aeec43303       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      24 minutes ago       Exited              kube-scheduler            0                   aeb85ed29ab1d       kube-scheduler-ha-313128
	
	
	==> coredns [25ee04d39c4c9db2ff0821a1dc65a49cfad785bb85ebe5629edada2adae88766] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[2028874168]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:14:49.789) (total time: 12485ms):
	Trace[2028874168]: ---"Objects listed" error:Unauthorized 12484ms (19:15:02.274)
	Trace[2028874168]: [12.485011222s] [12.485011222s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[518556850]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:14:51.317) (total time: 10956ms):
	Trace[518556850]: ---"Objects listed" error:Unauthorized 10956ms (19:15:02.274)
	Trace[518556850]: [10.95672225s] [10.95672225s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1832299381]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:15:05.312) (total time: 10961ms):
	Trace[1832299381]: ---"Objects listed" error:Unauthorized 10961ms (19:15:16.273)
	Trace[1832299381]: [10.961882297s] [10.961882297s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1829380266]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:15:04.269) (total time: 12005ms):
	Trace[1829380266]: ---"Objects listed" error:Unauthorized 12004ms (19:15:16.274)
	Trace[1829380266]: [12.005033407s] [12.005033407s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3615": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3615": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [5b950806bc4b9adc0f8e59a4f415683cd7f8ac70af5aff8721c18e6cdb426939] <==
	[INFO] 10.244.0.4:42561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009569s
	[INFO] 10.244.0.4:55114 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086084s
	[INFO] 10.244.0.4:53953 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067022s
	[INFO] 10.244.1.2:48594 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121564s
	[INFO] 10.244.1.2:53114 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166914s
	[INFO] 10.244.2.2:34659 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158468s
	[INFO] 10.244.2.2:34171 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000176512s
	[INFO] 10.244.0.4:58990 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009694s
	[INFO] 10.244.0.4:43562 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118003s
	[INFO] 10.244.0.4:33609 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086781s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1840&timeout=7m47s&timeoutSeconds=467&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1840&timeout=6m57s&timeoutSeconds=417&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[442499001]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.006) (total time: 13786ms):
	Trace[442499001]: ---"Objects listed" error:Unauthorized 13786ms (19:00:41.792)
	Trace[442499001]: [13.786231314s] [13.786231314s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[85447720]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.308) (total time: 13485ms):
	Trace[85447720]: ---"Objects listed" error:Unauthorized 13484ms (19:00:41.792)
	Trace[85447720]: [13.485399749s] [13.485399749s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [76bbd732b869589a844e9d63cd473e8f2972e7db008eb920e6301168c2a072aa] <==
	[INFO] 10.244.1.2:35244 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000089298s
	[INFO] 10.244.1.2:54461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083864s
	[INFO] 10.244.2.2:46046 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126212s
	[INFO] 10.244.2.2:45762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078805s
	[INFO] 10.244.0.4:56166 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109081s
	[INFO] 10.244.1.2:44485 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175559s
	[INFO] 10.244.1.2:60331 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113433s
	[INFO] 10.244.2.2:33944 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094759s
	[INFO] 10.244.2.2:54249 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007626s
	[INFO] 10.244.0.4:34049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000091783s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1840&timeout=6m52s&timeoutSeconds=412&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1840&timeout=9m31s&timeoutSeconds=571&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1362283421]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.166) (total time: 13625ms):
	Trace[1362283421]: ---"Objects listed" error:Unauthorized 13625ms (19:00:41.791)
	Trace[1362283421]: [13.625497855s] [13.625497855s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[2000776186]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:00:28.018) (total time: 13773ms):
	Trace[2000776186]: ---"Objects listed" error:Unauthorized 13773ms (19:00:41.792)
	Trace[2000776186]: [13.773675488s] [13.773675488s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f78069cd2a9356ea9008a86b3da74a235c5aadc691bf5f356fc2cb9f51650d1c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1922901699]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:14:49.815) (total time: 12458ms):
	Trace[1922901699]: ---"Objects listed" error:Unauthorized 12458ms (19:15:02.273)
	Trace[1922901699]: [12.458102139s] [12.458102139s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1864890241]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Sep-2024 19:14:49.457) (total time: 12816ms):
	Trace[1864890241]: ---"Objects listed" error:Unauthorized 12815ms (19:15:02.273)
	Trace[1864890241]: [12.81602235s] [12.81602235s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3648": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3648": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3653": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3653": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3562": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3562": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3653": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3653": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 18:51] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061784] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072122] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.201564] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.131661] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.284243] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +4.067260] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +4.541515] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.060417] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.251462] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.088029] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.073110] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.070796] kauditd_printk_skb: 38 callbacks suppressed
	[Sep 6 18:52] kauditd_printk_skb: 24 callbacks suppressed
	[Sep 6 19:02] systemd-fstab-generator[3534]: Ignoring "noauto" option for root device
	[  +0.149812] systemd-fstab-generator[3546]: Ignoring "noauto" option for root device
	[  +0.178231] systemd-fstab-generator[3560]: Ignoring "noauto" option for root device
	[  +0.144887] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[  +0.283505] systemd-fstab-generator[3600]: Ignoring "noauto" option for root device
	[  +0.753951] systemd-fstab-generator[3694]: Ignoring "noauto" option for root device
	[  +6.401831] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.203445] kauditd_printk_skb: 87 callbacks suppressed
	[Sep 6 19:03] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [36d954de08dab8fdec4f4e9c2099c04e49471626e8cd1295338177df12282c4a] <==
	{"level":"warn","ts":"2024-09-06T19:15:23.285816Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T19:15:10.283240Z","time spent":"13.002570965s","remote":"127.0.0.1:53180","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":0,"response size":0,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-06T19:15:23.286032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T19:15:10.283216Z","time spent":"13.002765204s","remote":"127.0.0.1:53214","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":0,"request content":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 "}
	{"level":"info","ts":"2024-09-06T19:15:23.285794Z","caller":"traceutil/trace.go:171","msg":"trace[700300913] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"13.007785909s","start":"2024-09-06T19:15:10.278005Z","end":"2024-09-06T19:15:23.285791Z","steps":["trace[700300913] 'agreement among raft nodes before linearized reading'  (duration: 13.002923852s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T19:15:23.289461Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T19:15:10.277996Z","time spent":"13.011452045s","remote":"127.0.0.1:60630","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:10000 "}
	{"level":"info","ts":"2024-09-06T19:15:23.285711Z","caller":"traceutil/trace.go:171","msg":"trace[632599073] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; }","duration":"13.007658268s","start":"2024-09-06T19:15:10.278050Z","end":"2024-09-06T19:15:23.285708Z","steps":["trace[632599073] 'agreement among raft nodes before linearized reading'  (duration: 13.002816036s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T19:15:23.289586Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T19:15:10.278041Z","time spent":"13.011538068s","remote":"127.0.0.1:53186","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" limit:10000 "}
	{"level":"info","ts":"2024-09-06T19:15:23.286327Z","caller":"traceutil/trace.go:171","msg":"trace[331486913] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; }","duration":"13.006763456s","start":"2024-09-06T19:15:10.279560Z","end":"2024-09-06T19:15:23.286324Z","steps":["trace[331486913] 'agreement among raft nodes before linearized reading'  (duration: 12.995828973s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T19:15:23.289747Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T19:15:10.279551Z","time spent":"13.010186608s","remote":"127.0.0.1:60506","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-06T19:15:23.364688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"844.790005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-06T19:15:23.364749Z","caller":"traceutil/trace.go:171","msg":"trace[2072896443] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; }","duration":"844.865206ms","start":"2024-09-06T19:15:22.519873Z","end":"2024-09-06T19:15:23.364739Z","steps":["trace[2072896443] 'agreement among raft nodes before linearized reading'  (duration: 844.788661ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T19:15:23.364820Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T19:15:22.519841Z","time spent":"844.970804ms","remote":"127.0.0.1:53132","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-09-06T19:15:23.365224Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.436869147s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-06T19:15:23.365269Z","caller":"traceutil/trace.go:171","msg":"trace[1744911337] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; }","duration":"5.43691726s","start":"2024-09-06T19:15:17.928345Z","end":"2024-09-06T19:15:23.365262Z","steps":["trace[1744911337] 'agreement among raft nodes before linearized reading'  (duration: 5.43686833s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-06T19:15:23.365293Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-06T19:15:17.928306Z","time spent":"5.436981419s","remote":"127.0.0.1:53132","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" "}
	2024/09/06 19:15:23 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-06T19:15:23.770770Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3173227703714679382,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-06T19:15:24.172430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-06T19:15:24.172601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:15:24.172637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 received MsgPreVoteResp from d9e0442f914d2c09 at term 3"}
	{"level":"info","ts":"2024-09-06T19:15:24.172675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 [logterm: 3, index: 4357] sent MsgPreVote request to 5c50db72c01fb063 at term 3"}
	{"level":"warn","ts":"2024-09-06T19:15:24.271426Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3173227703714679382,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:15:24.545005Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5c50db72c01fb063","rtt":"8.124246ms","error":"dial tcp 192.168.39.32:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:15:24.552301Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5c50db72c01fb063","rtt":"854.847µs","error":"dial tcp 192.168.39.32:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-06T19:15:24.772608Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3173227703714679382,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-06T19:15:25.273249Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3173227703714679382,"retry-timeout":"500ms"}
	
	
	==> etcd [e32b22b9f83ac55a6499ec878dabcc82c14d67c9819bd5abacacc2668993fde8] <==
	2024/09/06 19:00:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/06 19:00:43 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-06T19:00:43.692458Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.70:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:00:43.692566Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.70:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:00:43.692746Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"d9e0442f914d2c09","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-06T19:00:43.692938Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.692970Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693066Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693169Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693208Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693273Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693377Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5c50db72c01fb063"}
	{"level":"info","ts":"2024-09-06T19:00:43.693398Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693561Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693580Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693692Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693802Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693879Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"d9e0442f914d2c09","remote-peer-id":"63c578731edaad90"}
	{"level":"info","ts":"2024-09-06T19:00:43.693927Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"63c578731edaad90"}
	{"level":"warn","ts":"2024-09-06T19:00:43.697978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.908057312s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-06T19:00:43.698055Z","caller":"traceutil/trace.go:171","msg":"trace[1664952415] range","detail":"{range_begin:; range_end:; }","duration":"1.908148083s","start":"2024-09-06T19:00:41.789897Z","end":"2024-09-06T19:00:43.698045Z","steps":["trace[1664952415] 'agreement among raft nodes before linearized reading'  (duration: 1.908055362s)"],"step_count":1}
	{"level":"error","ts":"2024-09-06T19:00:43.698121Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-06T19:00:43.697906Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2024-09-06T19:00:43.698981Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2024-09-06T19:00:43.699230Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-313128","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.70:2380"],"advertise-client-urls":["https://192.168.39.70:2379"]}
	
	
	==> kernel <==
	 19:15:25 up 24 min,  0 users,  load average: 0.30, 0.45, 0.30
	Linux ha-313128 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [76ca94f1530095701d6f0ba28cc76b32fce6144f3493efe29033a182a740a83b] <==
	I0906 19:00:13.769733       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:00:13.769887       1 main.go:299] handling current node
	I0906 19:00:13.770039       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:00:13.770068       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:00:13.770242       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:00:13.770325       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:00:13.770653       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:00:13.770688       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:00:23.769408       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:00:23.769600       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:00:23.769804       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:00:23.769858       1 main.go:299] handling current node
	I0906 19:00:23.769886       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:00:23.769939       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:00:23.770105       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:00:23.770150       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	E0906 19:00:26.750051       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1815&timeout=5m45s&timeoutSeconds=345&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0906 19:00:33.769131       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:00:33.769253       1 main.go:299] handling current node
	I0906 19:00:33.769290       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:00:33.769309       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:00:33.769557       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0906 19:00:33.769593       1 main.go:322] Node ha-313128-m03 has CIDR [10.244.2.0/24] 
	I0906 19:00:33.769784       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:00:33.769821       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d3e14bee704aa69ed8c1c03e417161e9916fcc59368ec09b7921f208aff9c0f0] <==
	I0906 19:14:54.879838       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:14:54.879854       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:15:04.880109       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:15:04.880249       1 main.go:299] handling current node
	I0906 19:15:04.880286       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:15:04.880310       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:15:04.880618       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:15:04.880660       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	I0906 19:15:14.876126       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:15:14.876273       1 main.go:299] handling current node
	I0906 19:15:14.876324       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:15:14.876387       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:15:14.876767       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:15:14.876808       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	W0906 19:15:23.292082       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	I0906 19:15:23.292976       1 trace.go:236] Trace[223933388]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (06-Sep-2024 19:15:12.628) (total time: 10664ms):
	Trace[223933388]: ---"Objects listed" error:Unauthorized 10663ms (19:15:23.292)
	Trace[223933388]: [10.664639562s] [10.664639562s] END
	E0906 19:15:23.293185       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	I0906 19:15:24.874625       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0906 19:15:24.874664       1 main.go:299] handling current node
	I0906 19:15:24.874691       1 main.go:295] Handling node with IPs: map[192.168.39.32:{}]
	I0906 19:15:24.874700       1 main.go:322] Node ha-313128-m02 has CIDR [10.244.1.0/24] 
	I0906 19:15:24.874929       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0906 19:15:24.874941       1 main.go:322] Node ha-313128-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [78aafa2222cb34f7484f1189f1e14efe6a66294464a77ccd135d665024e833ea] <==
	E0906 19:15:23.286899       1 cacher.go:478] cacher (secrets): unexpected ListAndWatch error: failed to list *core.Secret: etcdserver: request timed out; reinitializing...
	E0906 19:15:23.286928       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	W0906 19:15:23.287117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: etcdserver: request timed out
	E0906 19:15:23.287159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: etcdserver: request timed out" logger="UnhandledError"
	W0906 19:15:23.287790       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: etcdserver: request timed out
	E0906 19:15:23.287845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: etcdserver: request timed out" logger="UnhandledError"
	W0906 19:15:23.287895       1 reflector.go:561] storage/cacher.go:/ingressclasses: failed to list *networking.IngressClass: etcdserver: request timed out
	E0906 19:15:23.287919       1 cacher.go:478] cacher (ingressclasses.networking.k8s.io): unexpected ListAndWatch error: failed to list *networking.IngressClass: etcdserver: request timed out; reinitializing...
	W0906 19:15:23.287962       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: request timed out. Retrying...
	F0906 19:15:23.288009       1 hooks.go:210] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: timed out waiting for the condition
	E0906 19:15:23.337089       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0906 19:15:23.347300       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: etcdserver: request timed out" logger="UnhandledError"
	E0906 19:15:23.347308       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E0906 19:15:23.347374       1 controller.go:145] "Failed to ensure lease exists, will retry" err="etcdserver: request timed out" interval="1.6s"
	W0906 19:15:23.347440       1 reflector.go:561] storage/cacher.go:/apiregistration.k8s.io/apiservices: failed to list *apiregistration.APIService: etcdserver: request timed out
	W0906 19:15:23.304611       1 reflector.go:561] storage/cacher.go:/horizontalpodautoscalers: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out
	W0906 19:15:23.287923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ClusterRoleBinding: etcdserver: request timed out
	W0906 19:15:23.336855       1 reflector.go:561] storage/cacher.go:/leases: failed to list *coordination.Lease: etcdserver: request timed out
	W0906 19:15:23.336922       1 reflector.go:561] storage/cacher.go:/validatingadmissionpolicies: failed to list *admissionregistration.ValidatingAdmissionPolicy: etcdserver: request timed out
	W0906 19:15:23.336946       1 reflector.go:561] storage/cacher.go:/ingress: failed to list *networking.Ingress: etcdserver: request timed out
	W0906 19:15:23.336964       1 reflector.go:561] storage/cacher.go:/poddisruptionbudgets: failed to list *policy.PodDisruptionBudget: etcdserver: request timed out
	W0906 19:15:23.336986       1 reflector.go:561] storage/cacher.go:/clusterroles: failed to list *rbac.ClusterRole: etcdserver: request timed out
	W0906 19:15:23.337006       1 reflector.go:561] storage/cacher.go:/storageclasses: failed to list *storage.StorageClass: etcdserver: request timed out
	W0906 19:15:23.337030       1 reflector.go:561] storage/cacher.go:/cronjobs: failed to list *batch.CronJob: etcdserver: request timed out
	W0906 19:15:23.337049       1 reflector.go:561] storage/cacher.go:/pods: failed to list *core.Pod: etcdserver: request timed out
	
	
	==> kube-controller-manager [1cfd32c774cafdca510436c4dbe68681fda9bbd0079c9cf53e90bb1adbbc5ced] <==
	E0906 19:15:22.411530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ControllerRevision: failed to list *v1.ControllerRevision: controllerrevisions.apps is forbidden: User \"system:kube-controller-manager\" cannot list resource \"controllerrevisions\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:22.858450       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0906 19:15:22.858682       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-313128-m02"
	E0906 19:15:22.858855       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.70:8443/api/v1/nodes/ha-313128-m02\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0906 19:15:22.859640       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0906 19:15:23.782208       1 gc_controller.go:151] "Failed to get node" err="node \"ha-313128-m03\" not found" logger="pod-garbage-collector-controller" node="ha-313128-m03"
	E0906 19:15:23.782260       1 gc_controller.go:151] "Failed to get node" err="node \"ha-313128-m03\" not found" logger="pod-garbage-collector-controller" node="ha-313128-m03"
	E0906 19:15:23.782267       1 gc_controller.go:151] "Failed to get node" err="node \"ha-313128-m03\" not found" logger="pod-garbage-collector-controller" node="ha-313128-m03"
	E0906 19:15:23.782273       1 gc_controller.go:151] "Failed to get node" err="node \"ha-313128-m03\" not found" logger="pod-garbage-collector-controller" node="ha-313128-m03"
	E0906 19:15:23.782278       1 gc_controller.go:151] "Failed to get node" err="node \"ha-313128-m03\" not found" logger="pod-garbage-collector-controller" node="ha-313128-m03"
	W0906 19:15:23.782903       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.70:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.70:8443: connect: connection refused
	W0906 19:15:24.095975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CronJob: Get "https://192.168.39.70:8443/apis/batch/v1/cronjobs?resourceVersion=3653": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:15:24.096173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CronJob: failed to list *v1.CronJob: Get \"https://192.168.39.70:8443/apis/batch/v1/cronjobs?resourceVersion=3653\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:15:24.283622       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.70:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.70:8443: connect: connection refused
	W0906 19:15:24.359716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ConfigMap: Get "https://192.168.39.70:8443/api/v1/configmaps?resourceVersion=3652": dial tcp 192.168.39.70:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	E0906 19:15:24.359845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.70:8443/api/v1/configmaps?resourceVersion=3652\": dial tcp 192.168.39.70:8443: connect: connection refused - error from a previous attempt: unexpected EOF" logger="UnhandledError"
	W0906 19:15:24.359716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Secret: Get "https://192.168.39.70:8443/api/v1/secrets?resourceVersion=3624": dial tcp 192.168.39.70:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	E0906 19:15:24.359907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://192.168.39.70:8443/api/v1/secrets?resourceVersion=3624\": dial tcp 192.168.39.70:8443: connect: connection refused - error from a previous attempt: unexpected EOF" logger="UnhandledError"
	W0906 19:15:24.363306       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.70:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.70:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.70:55152->192.168.39.70:8443: read: connection reset by peer
	W0906 19:15:24.902158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Endpoints: Get "https://192.168.39.70:8443/api/v1/endpoints?resourceVersion=3650": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:15:24.902243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get \"https://192.168.39.70:8443/api/v1/endpoints?resourceVersion=3650\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:15:25.240023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Deployment: Get "https://192.168.39.70:8443/apis/apps/v1/deployments?resourceVersion=3653": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:15:25.240093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Deployment: failed to list *v1.Deployment: Get \"https://192.168.39.70:8443/apis/apps/v1/deployments?resourceVersion=3653\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:15:25.284696       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.70:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.70:8443: connect: connection refused
	W0906 19:15:25.364310       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.70:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.70:8443: connect: connection refused
	
	
	==> kube-controller-manager [7fab375b2e00c6c1c477e49d20575c282cf15631db08117b7cbd6669002057a7] <==
	I0906 19:02:25.178571       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:02:25.751144       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:02:25.751189       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:02:25.753093       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:02:25.753241       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:02:25.753740       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:02:25.753823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0906 19:02:45.757127       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.70:8443/healthz\": dial tcp 192.168.39.70:8443: connect: connection refused"
	
	
	==> kube-proxy [135074e446370bfc1724716998ecd9329de93589a40126455b88401430e55ef1] <==
	E0906 18:59:34.909836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:37.981970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:37.982117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:37.982239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:37.982324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:37.982672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:37.982810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:44.125613       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:44.125832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:44.125977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:44.126031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:47.196966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:47.197099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:53.343030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:53.343098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 18:59:53.343233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 18:59:53.343270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:02.557917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:02.558078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:11.774788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:11.774869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1763\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:20.989983       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:20.990159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=1837\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:00:24.061791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:00:24.062012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1736\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [77c80de1adc0a59b2ca09f01724fcf628295a594a72819ee569328b61827713b] <==
	E0906 19:02:48.446328       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0906 19:03:04.718998       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.70"]
	E0906 19:03:04.719127       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:03:04.788646       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:03:04.788721       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:03:04.790601       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:03:04.795556       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:03:04.800789       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:03:04.800820       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:03:04.804328       1 config.go:197] "Starting service config controller"
	I0906 19:03:04.804405       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:03:04.804517       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:03:04.804524       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:03:04.807066       1 config.go:326] "Starting node config controller"
	I0906 19:03:04.807107       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:03:04.905154       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:03:04.905252       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:03:04.907240       1 shared_informer.go:320] Caches are synced for node config
	E0906 19:14:17.598529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3532&timeout=8m54s&timeoutSeconds=534&watch=true\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0906 19:14:17.598887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dha-313128&resourceVersion=3566&timeout=7m44s&timeoutSeconds=464&watch=true\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0906 19:14:42.174990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3538&timeout=9m10s&timeoutSeconds=550&watch=true\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:15:00.605864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3532": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:15:00.606440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=3532\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0906 19:15:19.038153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=3566": dial tcp 192.168.39.254:8443: connect: no route to host
	E0906 19:15:19.038297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-313128&resourceVersion=3566\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [a406aeec4330385ddeed24c98e7f74fbdd9eadca96884e16721a19d11e3a137f] <==
	E0906 18:54:39.143315       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8tm7b\": pod kube-proxy-8tm7b is already assigned to node \"ha-313128-m04\"" pod="kube-system/kube-proxy-8tm7b"
	I0906 18:54:39.143372       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8tm7b" node="ha-313128-m04"
	E0906 18:54:39.143180       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-k9szn\": pod kindnet-k9szn is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-k9szn" node="ha-313128-m04"
	E0906 18:54:39.144192       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod fdc10711-7099-424e-885e-65589f5642e5(kube-system/kindnet-k9szn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-k9szn"
	E0906 18:54:39.144252       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-k9szn\": pod kindnet-k9szn is already assigned to node \"ha-313128-m04\"" pod="kube-system/kindnet-k9szn"
	I0906 18:54:39.144297       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-k9szn" node="ha-313128-m04"
	E0906 18:54:39.236601       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rnm78\": pod kube-proxy-rnm78 is already assigned to node \"ha-313128-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rnm78" node="ha-313128-m04"
	E0906 18:54:39.236925       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rnm78\": pod kube-proxy-rnm78 is already assigned to node \"ha-313128-m04\"" pod="kube-system/kube-proxy-rnm78"
	I0906 18:54:39.240895       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rnm78" node="ha-313128-m04"
	E0906 19:00:32.447945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0906 19:00:34.548228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:34.780196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0906 19:00:34.781245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0906 19:00:34.940096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:36.433926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0906 19:00:36.554090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0906 19:00:38.972401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0906 19:00:38.989303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:40.451678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0906 19:00:40.700326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0906 19:00:41.073589       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0906 19:00:41.963164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0906 19:00:42.456392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	I0906 19:00:43.531312       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0906 19:00:43.532028       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bea01e33385d86bae3fc823d3986868525ccd7ff76ba0750e16e24a7b1229ec9] <==
	E0906 19:14:58.917421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:00.406059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:15:00.406117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:03.618441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 19:15:03.618584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:04.296707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 19:15:04.296812       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:04.678762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 19:15:04.678985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:08.729294       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 19:15:08.729550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:14.206700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 19:15:14.206830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:18.275780       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 19:15:18.276572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:19.694452       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 19:15:19.694622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:20.781194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 19:15:20.781379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:15:23.509022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.70:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=3653": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:15:23.509099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.70:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=3653\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:15:24.042811       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.70:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=3619": dial tcp 192.168.39.70:8443: connect: connection refused
	E0906 19:15:24.042879       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.70:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=3619\": dial tcp 192.168.39.70:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:15:24.359404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.70:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=3653": dial tcp 192.168.39.70:8443: connect: connection refused - error from a previous attempt: unexpected EOF
	E0906 19:15:24.359990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.70:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=3653\": dial tcp 192.168.39.70:8443: connect: connection refused - error from a previous attempt: unexpected EOF" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 06 19:15:15 ha-313128 kubelet[1323]: E0906 19:15:15.866261    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725650115865669920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:15:15 ha-313128 kubelet[1323]: E0906 19:15:15.964840    1323 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-313128.17f2bcbd22e17fdc\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-313128.17f2bcbd22e17fdc  kube-system   2018 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-313128,UID:19f5824a415bb48f2bb6ab3144efbec6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-313128,},FirstTimestamp:2024-09-06 18:58:47 +0000 UTC,LastTimestamp:2024-09-06 19:12:44.76007944 +0000 UTC m=+1279.499207799,Count:29,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-313128,}"
	Sep 06 19:15:15 ha-313128 kubelet[1323]: I0906 19:15:15.965019    1323 status_manager.go:851] "Failed to get status for pod" podUID="19f5824a415bb48f2bb6ab3144efbec6" pod="kube-system/kube-apiserver-ha-313128" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 06 19:15:15 ha-313128 kubelet[1323]: E0906 19:15:15.965256    1323 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-313128\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-313128?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 06 19:15:15 ha-313128 kubelet[1323]: E0906 19:15:15.965346    1323 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	Sep 06 19:15:19 ha-313128 kubelet[1323]: W0906 19:15:19.036866    1323 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=3620": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 06 19:15:19 ha-313128 kubelet[1323]: E0906 19:15:19.036970    1323 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=3620\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 06 19:15:19 ha-313128 kubelet[1323]: I0906 19:15:19.037073    1323 status_manager.go:851] "Failed to get status for pod" podUID="9cddf482287bf3b2dbb1236f43dc96c3" pod="kube-system/etcd-ha-313128" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 06 19:15:19 ha-313128 kubelet[1323]: E0906 19:15:19.037790    1323 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-313128?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Sep 06 19:15:22 ha-313128 kubelet[1323]: W0906 19:15:22.109023    1323 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=3484": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 06 19:15:22 ha-313128 kubelet[1323]: E0906 19:15:22.109433    1323 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=3484\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 06 19:15:22 ha-313128 kubelet[1323]: I0906 19:15:22.109038    1323 status_manager.go:851] "Failed to get status for pod" podUID="19f5824a415bb48f2bb6ab3144efbec6" pod="kube-system/kube-apiserver-ha-313128" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-313128\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 06 19:15:23 ha-313128 kubelet[1323]: I0906 19:15:23.888462    1323 scope.go:117] "RemoveContainer" containerID="8ef80321a59675e85d0517bd38e7c6d27c0438cd7afacb02d61bf74a53d7ff40"
	Sep 06 19:15:23 ha-313128 kubelet[1323]: I0906 19:15:23.888912    1323 scope.go:117] "RemoveContainer" containerID="78aafa2222cb34f7484f1189f1e14efe6a66294464a77ccd135d665024e833ea"
	Sep 06 19:15:23 ha-313128 kubelet[1323]: E0906 19:15:23.889091    1323 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-313128_kube-system(19f5824a415bb48f2bb6ab3144efbec6)\"" pod="kube-system/kube-apiserver-ha-313128" podUID="19f5824a415bb48f2bb6ab3144efbec6"
	Sep 06 19:15:25 ha-313128 kubelet[1323]: W0906 19:15:25.181867    1323 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=3484": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 06 19:15:25 ha-313128 kubelet[1323]: E0906 19:15:25.181942    1323 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=3484\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 06 19:15:25 ha-313128 kubelet[1323]: I0906 19:15:25.182006    1323 status_manager.go:851] "Failed to get status for pod" podUID="6c957eac-7904-4c39-b858-bfb7da32c75c" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 06 19:15:25 ha-313128 kubelet[1323]: I0906 19:15:25.279863    1323 scope.go:117] "RemoveContainer" containerID="78aafa2222cb34f7484f1189f1e14efe6a66294464a77ccd135d665024e833ea"
	Sep 06 19:15:25 ha-313128 kubelet[1323]: E0906 19:15:25.280005    1323 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-313128_kube-system(19f5824a415bb48f2bb6ab3144efbec6)\"" pod="kube-system/kube-apiserver-ha-313128" podUID="19f5824a415bb48f2bb6ab3144efbec6"
	Sep 06 19:15:25 ha-313128 kubelet[1323]: E0906 19:15:25.523523    1323 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:15:25 ha-313128 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:15:25 ha-313128 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:15:25 ha-313128 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:15:25 ha-313128 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:15:24.493130   34761 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19576-6021/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-313128 -n ha-313128
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-313128 -n ha-313128: exit status 2 (227.534317ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-313128" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (173.01s)
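
Editor's note on the stderr above: "failed to output last start logs ... bufio.Scanner: token too long" means lastStart.txt contains a line longer than bufio.Scanner's default 64 KiB token limit (bufio.MaxScanTokenSize), so the log echo was skipped. The sketch below is only an illustration of that Go behaviour under stated assumptions, not minikube's actual logs.go code; readLines is a hypothetical helper name. It shows the default scanner failing on an oversized line and Scanner.Buffer raising the limit so the same input scans.

package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// readLines is a hypothetical helper: it scans r line by line, but raises
// bufio.Scanner's token limit (default 64 KiB, bufio.MaxScanTokenSize) to
// maxLine bytes so a single very long line does not abort the scan.
func readLines(r io.Reader, maxLine int) ([]string, error) {
	sc := bufio.NewScanner(r)
	sc.Buffer(make([]byte, 0, 64*1024), maxLine)
	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	return lines, sc.Err()
}

func main() {
	long := strings.Repeat("a", 128*1024) // a single 128 KiB line

	// Default limit: Scan stops and Err reports "bufio.Scanner: token too long".
	def := bufio.NewScanner(strings.NewReader(long))
	for def.Scan() {
	}
	fmt.Println("default buffer:", def.Err())

	// Enlarged limit: the same input scans cleanly.
	lines, err := readLines(strings.NewReader(long), 1024*1024)
	fmt.Println("enlarged buffer:", len(lines), "line(s), err:", err)
}
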

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (326.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-002640
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-002640
E0906 19:29:49.187698   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-002640: exit status 82 (2m1.844311508s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-002640-m03"  ...
	* Stopping node "multinode-002640-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-002640" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-002640 --wait=true -v=8 --alsologtostderr
E0906 19:31:44.178661   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-002640 --wait=true -v=8 --alsologtostderr: (3m22.282360178s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-002640
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-002640 -n multinode-002640
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-002640 logs -n 25: (1.463623446s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m02:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3017084892/001/cp-test_multinode-002640-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m02:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640:/home/docker/cp-test_multinode-002640-m02_multinode-002640.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n multinode-002640 sudo cat                                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-002640-m02_multinode-002640.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m02:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03:/home/docker/cp-test_multinode-002640-m02_multinode-002640-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n multinode-002640-m03 sudo cat                                   | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-002640-m02_multinode-002640-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp testdata/cp-test.txt                                                | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m03:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3017084892/001/cp-test_multinode-002640-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m03:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640:/home/docker/cp-test_multinode-002640-m03_multinode-002640.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n multinode-002640 sudo cat                                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-002640-m03_multinode-002640.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m03:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m02:/home/docker/cp-test_multinode-002640-m03_multinode-002640-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n multinode-002640-m02 sudo cat                                   | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-002640-m03_multinode-002640-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-002640 node stop m03                                                          | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	| node    | multinode-002640 node start                                                             | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:28 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-002640                                                                | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:28 UTC |                     |
	| stop    | -p multinode-002640                                                                     | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:28 UTC |                     |
	| start   | -p multinode-002640                                                                     | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:30 UTC | 06 Sep 24 19:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-002640                                                                | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
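	
	The "Last Start" log below is the output of the restart recorded in the table above (the "start -p multinode-002640 --wait=true -v=8 --alsologtostderr" entry started at 19:30 UTC and completed at 19:33 UTC). A minimal sketch of replaying that step against the existing profile, reusing the flags from the table and the MINIKUBE_BIN reported in the log (illustrative only; the profile name and host paths are specific to this CI agent and will differ elsewhere):
	
	# restart the existing three-node profile and wait for all components
	out/minikube-linux-amd64 start -p multinode-002640 --wait=true -v=8 --alsologtostderr
	# then list the profile's nodes, matching the next entry in the table
	out/minikube-linux-amd64 node list -p multinode-002640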
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 19:30:18
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:30:18.359400   44141 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:30:18.359637   44141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:30:18.359645   44141 out.go:358] Setting ErrFile to fd 2...
	I0906 19:30:18.359649   44141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:30:18.359820   44141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:30:18.360332   44141 out.go:352] Setting JSON to false
	I0906 19:30:18.361217   44141 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4367,"bootTime":1725646651,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:30:18.361275   44141 start.go:139] virtualization: kvm guest
	I0906 19:30:18.363247   44141 out.go:177] * [multinode-002640] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:30:18.364505   44141 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:30:18.364509   44141 notify.go:220] Checking for updates...
	I0906 19:30:18.366816   44141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:30:18.367983   44141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:30:18.369023   44141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:30:18.370154   44141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:30:18.371280   44141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:30:18.372843   44141 config.go:182] Loaded profile config "multinode-002640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:30:18.372952   44141 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:30:18.373382   44141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:30:18.373458   44141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:30:18.388035   44141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0906 19:30:18.388451   44141 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:30:18.389002   44141 main.go:141] libmachine: Using API Version  1
	I0906 19:30:18.389022   44141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:30:18.389364   44141 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:30:18.389581   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:30:18.424352   44141 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 19:30:18.425395   44141 start.go:297] selected driver: kvm2
	I0906 19:30:18.425410   44141 start.go:901] validating driver "kvm2" against &{Name:multinode-002640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-002640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:30:18.425603   44141 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:30:18.425962   44141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:30:18.426034   44141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:30:18.440182   44141 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:30:18.441134   44141 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:30:18.441175   44141 cni.go:84] Creating CNI manager for ""
	I0906 19:30:18.441183   44141 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 19:30:18.441255   44141 start.go:340] cluster config:
	{Name:multinode-002640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-002640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:30:18.441412   44141 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:30:18.443422   44141 out.go:177] * Starting "multinode-002640" primary control-plane node in "multinode-002640" cluster
	I0906 19:30:18.444566   44141 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:30:18.444601   44141 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:30:18.444611   44141 cache.go:56] Caching tarball of preloaded images
	I0906 19:30:18.444686   44141 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:30:18.444699   44141 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 19:30:18.444816   44141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/config.json ...
	I0906 19:30:18.445030   44141 start.go:360] acquireMachinesLock for multinode-002640: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:30:18.445072   44141 start.go:364] duration metric: took 24.266µs to acquireMachinesLock for "multinode-002640"
	I0906 19:30:18.445085   44141 start.go:96] Skipping create...Using existing machine configuration
	I0906 19:30:18.445090   44141 fix.go:54] fixHost starting: 
	I0906 19:30:18.445356   44141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:30:18.445391   44141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:30:18.460181   44141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46333
	I0906 19:30:18.460569   44141 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:30:18.461023   44141 main.go:141] libmachine: Using API Version  1
	I0906 19:30:18.461049   44141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:30:18.461394   44141 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:30:18.461583   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:30:18.461723   44141 main.go:141] libmachine: (multinode-002640) Calling .GetState
	I0906 19:30:18.463405   44141 fix.go:112] recreateIfNeeded on multinode-002640: state=Running err=<nil>
	W0906 19:30:18.463432   44141 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 19:30:18.465309   44141 out.go:177] * Updating the running kvm2 "multinode-002640" VM ...
	I0906 19:30:18.466360   44141 machine.go:93] provisionDockerMachine start ...
	I0906 19:30:18.466381   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:30:18.466601   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:18.469095   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.469520   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.469555   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.469730   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:30:18.469886   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.470027   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.470193   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:30:18.470403   44141 main.go:141] libmachine: Using SSH client type: native
	I0906 19:30:18.470654   44141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0906 19:30:18.470673   44141 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 19:30:18.582051   44141 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-002640
	
	I0906 19:30:18.582080   44141 main.go:141] libmachine: (multinode-002640) Calling .GetMachineName
	I0906 19:30:18.582357   44141 buildroot.go:166] provisioning hostname "multinode-002640"
	I0906 19:30:18.582381   44141 main.go:141] libmachine: (multinode-002640) Calling .GetMachineName
	I0906 19:30:18.582571   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:18.585086   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.585436   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.585458   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.585569   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:30:18.585716   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.585869   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.585986   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:30:18.586119   44141 main.go:141] libmachine: Using SSH client type: native
	I0906 19:30:18.586311   44141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0906 19:30:18.586333   44141 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-002640 && echo "multinode-002640" | sudo tee /etc/hostname
	I0906 19:30:18.708399   44141 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-002640
	
	I0906 19:30:18.708425   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:18.711246   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.711583   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.711634   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.711911   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:30:18.712093   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.712283   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.712481   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:30:18.712646   44141 main.go:141] libmachine: Using SSH client type: native
	I0906 19:30:18.712886   44141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0906 19:30:18.712912   44141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-002640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-002640/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-002640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:30:18.822640   44141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:30:18.822669   44141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:30:18.822685   44141 buildroot.go:174] setting up certificates
	I0906 19:30:18.822693   44141 provision.go:84] configureAuth start
	I0906 19:30:18.822700   44141 main.go:141] libmachine: (multinode-002640) Calling .GetMachineName
	I0906 19:30:18.822970   44141 main.go:141] libmachine: (multinode-002640) Calling .GetIP
	I0906 19:30:18.825909   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.826423   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.826443   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.826650   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:18.829103   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.829463   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.829498   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.829665   44141 provision.go:143] copyHostCerts
	I0906 19:30:18.829697   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:30:18.829737   44141 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:30:18.829757   44141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:30:18.829837   44141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:30:18.829949   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:30:18.829975   44141 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:30:18.829982   44141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:30:18.830026   44141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:30:18.830105   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:30:18.830137   44141 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:30:18.830146   44141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:30:18.830186   44141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:30:18.830268   44141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.multinode-002640 san=[127.0.0.1 192.168.39.11 localhost minikube multinode-002640]
	I0906 19:30:18.958949   44141 provision.go:177] copyRemoteCerts
	I0906 19:30:18.959011   44141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:30:18.959050   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:18.961879   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.962204   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.962229   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.962450   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:30:18.962633   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.962823   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:30:18.962934   44141 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/multinode-002640/id_rsa Username:docker}
	I0906 19:30:19.049693   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 19:30:19.049772   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:30:19.078166   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 19:30:19.078232   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 19:30:19.106308   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 19:30:19.106382   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 19:30:19.132542   44141 provision.go:87] duration metric: took 309.840007ms to configureAuth
	I0906 19:30:19.132572   44141 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:30:19.132780   44141 config.go:182] Loaded profile config "multinode-002640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:30:19.132842   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:19.135345   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:19.135706   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:19.135748   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:19.135889   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:30:19.136060   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:19.136241   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:19.136382   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:30:19.136538   44141 main.go:141] libmachine: Using SSH client type: native
	I0906 19:30:19.136707   44141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0906 19:30:19.136721   44141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:31:49.794349   44141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:31:49.794375   44141 machine.go:96] duration metric: took 1m31.328001388s to provisionDockerMachine
	I0906 19:31:49.794388   44141 start.go:293] postStartSetup for "multinode-002640" (driver="kvm2")
	I0906 19:31:49.794399   44141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:31:49.794416   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:31:49.794763   44141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:31:49.794798   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:31:49.798045   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:49.798523   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:49.798546   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:49.798760   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:31:49.798953   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:31:49.799104   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:31:49.799242   44141 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/multinode-002640/id_rsa Username:docker}
	I0906 19:31:49.884200   44141 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:31:49.888428   44141 command_runner.go:130] > NAME=Buildroot
	I0906 19:31:49.888451   44141 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0906 19:31:49.888458   44141 command_runner.go:130] > ID=buildroot
	I0906 19:31:49.888465   44141 command_runner.go:130] > VERSION_ID=2023.02.9
	I0906 19:31:49.888477   44141 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0906 19:31:49.888512   44141 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:31:49.888531   44141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:31:49.888584   44141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:31:49.888661   44141 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:31:49.888669   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 19:31:49.888745   44141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:31:49.899140   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:31:49.923154   44141 start.go:296] duration metric: took 128.75305ms for postStartSetup
	I0906 19:31:49.923203   44141 fix.go:56] duration metric: took 1m31.478112603s for fixHost
	I0906 19:31:49.923226   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:31:49.925945   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:49.926297   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:49.926321   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:49.926472   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:31:49.926683   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:31:49.926873   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:31:49.927016   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:31:49.927159   44141 main.go:141] libmachine: Using SSH client type: native
	I0906 19:31:49.927372   44141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0906 19:31:49.927384   44141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:31:50.037836   44141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725651110.007504213
	
	I0906 19:31:50.037858   44141 fix.go:216] guest clock: 1725651110.007504213
	I0906 19:31:50.037865   44141 fix.go:229] Guest: 2024-09-06 19:31:50.007504213 +0000 UTC Remote: 2024-09-06 19:31:49.923208502 +0000 UTC m=+91.597491316 (delta=84.295711ms)
	I0906 19:31:50.037883   44141 fix.go:200] guest clock delta is within tolerance: 84.295711ms
	I0906 19:31:50.037887   44141 start.go:83] releasing machines lock for "multinode-002640", held for 1m31.592808597s
	I0906 19:31:50.037904   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:31:50.038179   44141 main.go:141] libmachine: (multinode-002640) Calling .GetIP
	I0906 19:31:50.041081   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.041525   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:50.041554   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.041660   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:31:50.042202   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:31:50.042382   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:31:50.042488   44141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:31:50.042526   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:31:50.042634   44141 ssh_runner.go:195] Run: cat /version.json
	I0906 19:31:50.042661   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:31:50.045139   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.045481   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.045521   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:50.045541   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.045693   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:31:50.045839   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:31:50.045999   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:50.046023   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:31:50.046025   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.046159   44141 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/multinode-002640/id_rsa Username:docker}
	I0906 19:31:50.046173   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:31:50.046325   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:31:50.046478   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:31:50.046650   44141 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/multinode-002640/id_rsa Username:docker}
	I0906 19:31:50.152801   44141 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0906 19:31:50.152889   44141 command_runner.go:130] > {"iso_version": "v1.34.0", "kicbase_version": "v0.0.44-1724862063-19530", "minikube_version": "v1.34.0", "commit": "613a681f9f90c87e637792fcb55bc4d32fe5c29c"}
	I0906 19:31:50.153018   44141 ssh_runner.go:195] Run: systemctl --version
	I0906 19:31:50.159012   44141 command_runner.go:130] > systemd 252 (252)
	I0906 19:31:50.159053   44141 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0906 19:31:50.159124   44141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:31:50.322904   44141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 19:31:50.328799   44141 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0906 19:31:50.328845   44141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:31:50.328916   44141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:31:50.338105   44141 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 19:31:50.338130   44141 start.go:495] detecting cgroup driver to use...
	I0906 19:31:50.338180   44141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:31:50.354405   44141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:31:50.368385   44141 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:31:50.368457   44141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:31:50.382453   44141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:31:50.397064   44141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:31:50.561682   44141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:31:50.706749   44141 docker.go:233] disabling docker service ...
	I0906 19:31:50.706821   44141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:31:50.723368   44141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:31:50.736800   44141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:31:50.872096   44141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:31:51.009108   44141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:31:51.022542   44141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:31:51.041233   44141 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0906 19:31:51.041267   44141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 19:31:51.041306   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.051541   44141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:31:51.051602   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.062095   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.071925   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.081827   44141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:31:51.091955   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.101991   44141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.113254   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.123919   44141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:31:51.133077   44141 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0906 19:31:51.133142   44141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:31:51.142166   44141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:31:51.281135   44141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 19:31:58.670116   44141 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.388943235s)
	I0906 19:31:58.670154   44141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:31:58.670207   44141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:31:58.675579   44141 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0906 19:31:58.675600   44141 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 19:31:58.675607   44141 command_runner.go:130] > Device: 0,22	Inode: 1322        Links: 1
	I0906 19:31:58.675615   44141 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 19:31:58.675623   44141 command_runner.go:130] > Access: 2024-09-06 19:31:58.533357989 +0000
	I0906 19:31:58.675642   44141 command_runner.go:130] > Modify: 2024-09-06 19:31:58.533357989 +0000
	I0906 19:31:58.675650   44141 command_runner.go:130] > Change: 2024-09-06 19:31:58.533357989 +0000
	I0906 19:31:58.675659   44141 command_runner.go:130] >  Birth: -
	I0906 19:31:58.675735   44141 start.go:563] Will wait 60s for crictl version
	I0906 19:31:58.675780   44141 ssh_runner.go:195] Run: which crictl
	I0906 19:31:58.679396   44141 command_runner.go:130] > /usr/bin/crictl
	I0906 19:31:58.679530   44141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:31:58.714616   44141 command_runner.go:130] > Version:  0.1.0
	I0906 19:31:58.714643   44141 command_runner.go:130] > RuntimeName:  cri-o
	I0906 19:31:58.714647   44141 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0906 19:31:58.714653   44141 command_runner.go:130] > RuntimeApiVersion:  v1
	I0906 19:31:58.714669   44141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 19:31:58.714742   44141 ssh_runner.go:195] Run: crio --version
	I0906 19:31:58.743616   44141 command_runner.go:130] > crio version 1.29.1
	I0906 19:31:58.743640   44141 command_runner.go:130] > Version:        1.29.1
	I0906 19:31:58.743648   44141 command_runner.go:130] > GitCommit:      unknown
	I0906 19:31:58.743653   44141 command_runner.go:130] > GitCommitDate:  unknown
	I0906 19:31:58.743658   44141 command_runner.go:130] > GitTreeState:   clean
	I0906 19:31:58.743666   44141 command_runner.go:130] > BuildDate:      2024-09-03T22:31:57Z
	I0906 19:31:58.743671   44141 command_runner.go:130] > GoVersion:      go1.21.6
	I0906 19:31:58.743677   44141 command_runner.go:130] > Compiler:       gc
	I0906 19:31:58.743684   44141 command_runner.go:130] > Platform:       linux/amd64
	I0906 19:31:58.743694   44141 command_runner.go:130] > Linkmode:       dynamic
	I0906 19:31:58.743701   44141 command_runner.go:130] > BuildTags:      
	I0906 19:31:58.743707   44141 command_runner.go:130] >   containers_image_ostree_stub
	I0906 19:31:58.743713   44141 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0906 19:31:58.743720   44141 command_runner.go:130] >   btrfs_noversion
	I0906 19:31:58.743731   44141 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0906 19:31:58.743739   44141 command_runner.go:130] >   libdm_no_deferred_remove
	I0906 19:31:58.743764   44141 command_runner.go:130] >   seccomp
	I0906 19:31:58.743774   44141 command_runner.go:130] > LDFlags:          unknown
	I0906 19:31:58.743780   44141 command_runner.go:130] > SeccompEnabled:   true
	I0906 19:31:58.743786   44141 command_runner.go:130] > AppArmorEnabled:  false
	I0906 19:31:58.743851   44141 ssh_runner.go:195] Run: crio --version
	I0906 19:31:58.770470   44141 command_runner.go:130] > crio version 1.29.1
	I0906 19:31:58.770493   44141 command_runner.go:130] > Version:        1.29.1
	I0906 19:31:58.770519   44141 command_runner.go:130] > GitCommit:      unknown
	I0906 19:31:58.770525   44141 command_runner.go:130] > GitCommitDate:  unknown
	I0906 19:31:58.770530   44141 command_runner.go:130] > GitTreeState:   clean
	I0906 19:31:58.770538   44141 command_runner.go:130] > BuildDate:      2024-09-03T22:31:57Z
	I0906 19:31:58.770544   44141 command_runner.go:130] > GoVersion:      go1.21.6
	I0906 19:31:58.770550   44141 command_runner.go:130] > Compiler:       gc
	I0906 19:31:58.770557   44141 command_runner.go:130] > Platform:       linux/amd64
	I0906 19:31:58.770564   44141 command_runner.go:130] > Linkmode:       dynamic
	I0906 19:31:58.770586   44141 command_runner.go:130] > BuildTags:      
	I0906 19:31:58.770596   44141 command_runner.go:130] >   containers_image_ostree_stub
	I0906 19:31:58.770603   44141 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0906 19:31:58.770610   44141 command_runner.go:130] >   btrfs_noversion
	I0906 19:31:58.770620   44141 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0906 19:31:58.770627   44141 command_runner.go:130] >   libdm_no_deferred_remove
	I0906 19:31:58.770634   44141 command_runner.go:130] >   seccomp
	I0906 19:31:58.770641   44141 command_runner.go:130] > LDFlags:          unknown
	I0906 19:31:58.770649   44141 command_runner.go:130] > SeccompEnabled:   true
	I0906 19:31:58.770658   44141 command_runner.go:130] > AppArmorEnabled:  false
	I0906 19:31:58.773448   44141 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 19:31:58.774493   44141 main.go:141] libmachine: (multinode-002640) Calling .GetIP
	I0906 19:31:58.777060   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:58.777350   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:58.777375   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:58.777577   44141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 19:31:58.781689   44141 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0906 19:31:58.781889   44141 kubeadm.go:883] updating cluster {Name:multinode-002640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-002640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:31:58.782026   44141 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:31:58.782064   44141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:31:58.828445   44141 command_runner.go:130] > {
	I0906 19:31:58.828465   44141 command_runner.go:130] >   "images": [
	I0906 19:31:58.828470   44141 command_runner.go:130] >     {
	I0906 19:31:58.828477   44141 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0906 19:31:58.828481   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828486   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0906 19:31:58.828490   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828494   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828510   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0906 19:31:58.828516   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0906 19:31:58.828520   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828524   44141 command_runner.go:130] >       "size": "87165492",
	I0906 19:31:58.828528   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.828532   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.828538   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828542   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828545   44141 command_runner.go:130] >     },
	I0906 19:31:58.828555   44141 command_runner.go:130] >     {
	I0906 19:31:58.828561   44141 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0906 19:31:58.828565   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828570   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0906 19:31:58.828574   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828578   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828585   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0906 19:31:58.828595   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0906 19:31:58.828599   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828603   44141 command_runner.go:130] >       "size": "87190579",
	I0906 19:31:58.828607   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.828616   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.828620   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828624   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828627   44141 command_runner.go:130] >     },
	I0906 19:31:58.828631   44141 command_runner.go:130] >     {
	I0906 19:31:58.828636   44141 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0906 19:31:58.828641   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828645   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0906 19:31:58.828649   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828653   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828663   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0906 19:31:58.828669   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0906 19:31:58.828675   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828680   44141 command_runner.go:130] >       "size": "1363676",
	I0906 19:31:58.828683   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.828688   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.828691   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828695   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828699   44141 command_runner.go:130] >     },
	I0906 19:31:58.828702   44141 command_runner.go:130] >     {
	I0906 19:31:58.828708   44141 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0906 19:31:58.828713   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828717   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0906 19:31:58.828721   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828727   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828738   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0906 19:31:58.828754   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0906 19:31:58.828761   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828765   44141 command_runner.go:130] >       "size": "31470524",
	I0906 19:31:58.828769   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.828773   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.828778   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828784   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828787   44141 command_runner.go:130] >     },
	I0906 19:31:58.828790   44141 command_runner.go:130] >     {
	I0906 19:31:58.828797   44141 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0906 19:31:58.828801   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828806   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0906 19:31:58.828812   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828816   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828825   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0906 19:31:58.828832   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0906 19:31:58.828838   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828842   44141 command_runner.go:130] >       "size": "61245718",
	I0906 19:31:58.828845   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.828850   44141 command_runner.go:130] >       "username": "nonroot",
	I0906 19:31:58.828865   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828870   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828875   44141 command_runner.go:130] >     },
	I0906 19:31:58.828883   44141 command_runner.go:130] >     {
	I0906 19:31:58.828890   44141 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0906 19:31:58.828899   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828906   44141 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0906 19:31:58.828915   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828920   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828929   44141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0906 19:31:58.828935   44141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0906 19:31:58.828941   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828945   44141 command_runner.go:130] >       "size": "149009664",
	I0906 19:31:58.828957   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.828964   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.828972   44141 command_runner.go:130] >       },
	I0906 19:31:58.828979   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.828988   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828995   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828998   44141 command_runner.go:130] >     },
	I0906 19:31:58.829001   44141 command_runner.go:130] >     {
	I0906 19:31:58.829007   44141 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0906 19:31:58.829013   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.829018   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0906 19:31:58.829021   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829025   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.829032   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0906 19:31:58.829041   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0906 19:31:58.829046   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829050   44141 command_runner.go:130] >       "size": "95233506",
	I0906 19:31:58.829056   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.829059   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.829063   44141 command_runner.go:130] >       },
	I0906 19:31:58.829067   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.829073   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.829076   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.829079   44141 command_runner.go:130] >     },
	I0906 19:31:58.829083   44141 command_runner.go:130] >     {
	I0906 19:31:58.829091   44141 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0906 19:31:58.829094   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.829099   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0906 19:31:58.829105   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829108   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.829129   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0906 19:31:58.829142   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0906 19:31:58.829145   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829149   44141 command_runner.go:130] >       "size": "89437512",
	I0906 19:31:58.829152   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.829156   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.829159   44141 command_runner.go:130] >       },
	I0906 19:31:58.829163   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.829177   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.829183   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.829186   44141 command_runner.go:130] >     },
	I0906 19:31:58.829189   44141 command_runner.go:130] >     {
	I0906 19:31:58.829195   44141 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0906 19:31:58.829199   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.829203   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0906 19:31:58.829206   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829210   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.829217   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0906 19:31:58.829223   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0906 19:31:58.829226   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829230   44141 command_runner.go:130] >       "size": "92728217",
	I0906 19:31:58.829234   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.829238   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.829241   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.829245   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.829249   44141 command_runner.go:130] >     },
	I0906 19:31:58.829253   44141 command_runner.go:130] >     {
	I0906 19:31:58.829262   44141 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0906 19:31:58.829267   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.829274   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0906 19:31:58.829277   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829281   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.829288   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0906 19:31:58.829297   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0906 19:31:58.829301   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829305   44141 command_runner.go:130] >       "size": "68420936",
	I0906 19:31:58.829311   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.829315   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.829318   44141 command_runner.go:130] >       },
	I0906 19:31:58.829322   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.829326   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.829330   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.829333   44141 command_runner.go:130] >     },
	I0906 19:31:58.829336   44141 command_runner.go:130] >     {
	I0906 19:31:58.829346   44141 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0906 19:31:58.829352   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.829357   44141 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0906 19:31:58.829360   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829363   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.829373   44141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0906 19:31:58.829382   44141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0906 19:31:58.829385   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829389   44141 command_runner.go:130] >       "size": "742080",
	I0906 19:31:58.829393   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.829397   44141 command_runner.go:130] >         "value": "65535"
	I0906 19:31:58.829403   44141 command_runner.go:130] >       },
	I0906 19:31:58.829407   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.829413   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.829417   44141 command_runner.go:130] >       "pinned": true
	I0906 19:31:58.829420   44141 command_runner.go:130] >     }
	I0906 19:31:58.829423   44141 command_runner.go:130] >   ]
	I0906 19:31:58.829426   44141 command_runner.go:130] > }
	I0906 19:31:58.830439   44141 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:31:58.830451   44141 crio.go:433] Images already preloaded, skipping extraction
	I0906 19:31:58.830490   44141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:31:58.861960   44141 command_runner.go:130] > {
	I0906 19:31:58.861979   44141 command_runner.go:130] >   "images": [
	I0906 19:31:58.861983   44141 command_runner.go:130] >     {
	I0906 19:31:58.861991   44141 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0906 19:31:58.861995   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862001   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0906 19:31:58.862004   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862008   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862019   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0906 19:31:58.862026   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0906 19:31:58.862030   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862034   44141 command_runner.go:130] >       "size": "87165492",
	I0906 19:31:58.862038   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862042   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862046   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862051   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862054   44141 command_runner.go:130] >     },
	I0906 19:31:58.862057   44141 command_runner.go:130] >     {
	I0906 19:31:58.862063   44141 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0906 19:31:58.862067   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862072   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0906 19:31:58.862079   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862082   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862090   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0906 19:31:58.862097   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0906 19:31:58.862106   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862111   44141 command_runner.go:130] >       "size": "87190579",
	I0906 19:31:58.862115   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862123   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862129   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862133   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862136   44141 command_runner.go:130] >     },
	I0906 19:31:58.862140   44141 command_runner.go:130] >     {
	I0906 19:31:58.862146   44141 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0906 19:31:58.862152   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862157   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0906 19:31:58.862161   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862164   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862171   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0906 19:31:58.862179   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0906 19:31:58.862182   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862187   44141 command_runner.go:130] >       "size": "1363676",
	I0906 19:31:58.862199   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862206   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862210   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862213   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862216   44141 command_runner.go:130] >     },
	I0906 19:31:58.862220   44141 command_runner.go:130] >     {
	I0906 19:31:58.862225   44141 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0906 19:31:58.862229   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862234   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0906 19:31:58.862240   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862244   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862252   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0906 19:31:58.862268   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0906 19:31:58.862273   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862277   44141 command_runner.go:130] >       "size": "31470524",
	I0906 19:31:58.862281   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862284   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862288   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862292   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862299   44141 command_runner.go:130] >     },
	I0906 19:31:58.862303   44141 command_runner.go:130] >     {
	I0906 19:31:58.862309   44141 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0906 19:31:58.862315   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862320   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0906 19:31:58.862323   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862329   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862337   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0906 19:31:58.862352   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0906 19:31:58.862357   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862361   44141 command_runner.go:130] >       "size": "61245718",
	I0906 19:31:58.862365   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862369   44141 command_runner.go:130] >       "username": "nonroot",
	I0906 19:31:58.862373   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862377   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862380   44141 command_runner.go:130] >     },
	I0906 19:31:58.862384   44141 command_runner.go:130] >     {
	I0906 19:31:58.862390   44141 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0906 19:31:58.862396   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862401   44141 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0906 19:31:58.862406   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862410   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862417   44141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0906 19:31:58.862426   44141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0906 19:31:58.862429   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862434   44141 command_runner.go:130] >       "size": "149009664",
	I0906 19:31:58.862439   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.862443   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.862448   44141 command_runner.go:130] >       },
	I0906 19:31:58.862452   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862456   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862462   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862465   44141 command_runner.go:130] >     },
	I0906 19:31:58.862468   44141 command_runner.go:130] >     {
	I0906 19:31:58.862474   44141 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0906 19:31:58.862480   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862491   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0906 19:31:58.862497   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862508   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862517   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0906 19:31:58.862525   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0906 19:31:58.862528   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862532   44141 command_runner.go:130] >       "size": "95233506",
	I0906 19:31:58.862535   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.862540   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.862549   44141 command_runner.go:130] >       },
	I0906 19:31:58.862553   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862556   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862560   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862563   44141 command_runner.go:130] >     },
	I0906 19:31:58.862567   44141 command_runner.go:130] >     {
	I0906 19:31:58.862572   44141 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0906 19:31:58.862578   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862583   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0906 19:31:58.862589   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862593   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862613   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0906 19:31:58.862623   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0906 19:31:58.862627   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862630   44141 command_runner.go:130] >       "size": "89437512",
	I0906 19:31:58.862634   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.862638   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.862641   44141 command_runner.go:130] >       },
	I0906 19:31:58.862645   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862650   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862654   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862660   44141 command_runner.go:130] >     },
	I0906 19:31:58.862663   44141 command_runner.go:130] >     {
	I0906 19:31:58.862669   44141 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0906 19:31:58.862674   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862683   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0906 19:31:58.862689   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862697   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862706   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0906 19:31:58.862713   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0906 19:31:58.862718   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862723   44141 command_runner.go:130] >       "size": "92728217",
	I0906 19:31:58.862727   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862731   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862735   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862738   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862742   44141 command_runner.go:130] >     },
	I0906 19:31:58.862745   44141 command_runner.go:130] >     {
	I0906 19:31:58.862754   44141 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0906 19:31:58.862760   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862765   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0906 19:31:58.862771   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862774   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862782   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0906 19:31:58.862793   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0906 19:31:58.862798   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862801   44141 command_runner.go:130] >       "size": "68420936",
	I0906 19:31:58.862806   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.862810   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.862813   44141 command_runner.go:130] >       },
	I0906 19:31:58.862817   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862821   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862825   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862829   44141 command_runner.go:130] >     },
	I0906 19:31:58.862832   44141 command_runner.go:130] >     {
	I0906 19:31:58.862838   44141 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0906 19:31:58.862845   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862849   44141 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0906 19:31:58.862852   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862856   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862863   44141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0906 19:31:58.862871   44141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0906 19:31:58.862875   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862885   44141 command_runner.go:130] >       "size": "742080",
	I0906 19:31:58.862889   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.862893   44141 command_runner.go:130] >         "value": "65535"
	I0906 19:31:58.862899   44141 command_runner.go:130] >       },
	I0906 19:31:58.862903   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862906   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862910   44141 command_runner.go:130] >       "pinned": true
	I0906 19:31:58.862913   44141 command_runner.go:130] >     }
	I0906 19:31:58.862916   44141 command_runner.go:130] >   ]
	I0906 19:31:58.862922   44141 command_runner.go:130] > }
	I0906 19:31:58.863521   44141 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:31:58.863538   44141 cache_images.go:84] Images are preloaded, skipping loading
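
The two "sudo crictl images --output json" runs above show the JSON shape that gets inspected before the preload is skipped. Below is a minimal, hypothetical Go sketch of that kind of check, not minikube's actual implementation: it shells out to crictl, decodes the images[].repoTags fields seen in the log, and reports whether every required tag is already present. The names preloaded, crictlImages, and required are illustrative only.

// Sketch only: assumes crictl is installed and sudo works non-interactively,
// mirroring the command shown in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the fields of interest from the JSON dumped above.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloaded reports whether every required tag appears in the runtime's image store.
func preloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Example tags taken from the repoTags listed in the log output above.
	ok, err := preloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	})
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	fmt.Println("all images preloaded:", ok)
}
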
	I0906 19:31:58.863546   44141 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.31.0 crio true true} ...
	I0906 19:31:58.863631   44141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-002640 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-002640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:31:58.863688   44141 ssh_runner.go:195] Run: crio config
	I0906 19:31:58.895019   44141 command_runner.go:130] ! time="2024-09-06 19:31:58.864354224Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0906 19:31:58.901930   44141 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0906 19:31:58.908688   44141 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0906 19:31:58.908713   44141 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0906 19:31:58.908720   44141 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0906 19:31:58.908723   44141 command_runner.go:130] > #
	I0906 19:31:58.908730   44141 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0906 19:31:58.908736   44141 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0906 19:31:58.908742   44141 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0906 19:31:58.908751   44141 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0906 19:31:58.908756   44141 command_runner.go:130] > # reload'.
	I0906 19:31:58.908765   44141 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0906 19:31:58.908779   44141 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0906 19:31:58.908795   44141 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0906 19:31:58.908804   44141 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0906 19:31:58.908810   44141 command_runner.go:130] > [crio]
	I0906 19:31:58.908816   44141 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0906 19:31:58.908820   44141 command_runner.go:130] > # containers images, in this directory.
	I0906 19:31:58.908824   44141 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0906 19:31:58.908834   44141 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0906 19:31:58.908844   44141 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0906 19:31:58.908852   44141 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0906 19:31:58.908869   44141 command_runner.go:130] > # imagestore = ""
	I0906 19:31:58.908880   44141 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0906 19:31:58.908892   44141 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0906 19:31:58.908899   44141 command_runner.go:130] > storage_driver = "overlay"
	I0906 19:31:58.908906   44141 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0906 19:31:58.908914   44141 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0906 19:31:58.908919   44141 command_runner.go:130] > storage_option = [
	I0906 19:31:58.908923   44141 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0906 19:31:58.908935   44141 command_runner.go:130] > ]
	I0906 19:31:58.908944   44141 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0906 19:31:58.908950   44141 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0906 19:31:58.908957   44141 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0906 19:31:58.908962   44141 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0906 19:31:58.908968   44141 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0906 19:31:58.908975   44141 command_runner.go:130] > # always happen on a node reboot
	I0906 19:31:58.908979   44141 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0906 19:31:58.908996   44141 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0906 19:31:58.909004   44141 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0906 19:31:58.909009   44141 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0906 19:31:58.909014   44141 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0906 19:31:58.909020   44141 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0906 19:31:58.909028   44141 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0906 19:31:58.909033   44141 command_runner.go:130] > # internal_wipe = true
	I0906 19:31:58.909040   44141 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0906 19:31:58.909046   44141 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0906 19:31:58.909052   44141 command_runner.go:130] > # internal_repair = false
	I0906 19:31:58.909057   44141 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0906 19:31:58.909062   44141 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0906 19:31:58.909068   44141 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0906 19:31:58.909072   44141 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0906 19:31:58.909078   44141 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0906 19:31:58.909084   44141 command_runner.go:130] > [crio.api]
	I0906 19:31:58.909090   44141 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0906 19:31:58.909096   44141 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0906 19:31:58.909101   44141 command_runner.go:130] > # IP address on which the stream server will listen.
	I0906 19:31:58.909107   44141 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0906 19:31:58.909113   44141 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0906 19:31:58.909136   44141 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0906 19:31:58.909142   44141 command_runner.go:130] > # stream_port = "0"
	I0906 19:31:58.909148   44141 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0906 19:31:58.909152   44141 command_runner.go:130] > # stream_enable_tls = false
	I0906 19:31:58.909158   44141 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0906 19:31:58.909162   44141 command_runner.go:130] > # stream_idle_timeout = ""
	I0906 19:31:58.909168   44141 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0906 19:31:58.909181   44141 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0906 19:31:58.909187   44141 command_runner.go:130] > # minutes.
	I0906 19:31:58.909191   44141 command_runner.go:130] > # stream_tls_cert = ""
	I0906 19:31:58.909197   44141 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0906 19:31:58.909205   44141 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0906 19:31:58.909209   44141 command_runner.go:130] > # stream_tls_key = ""
	I0906 19:31:58.909215   44141 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0906 19:31:58.909221   44141 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0906 19:31:58.909242   44141 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0906 19:31:58.909248   44141 command_runner.go:130] > # stream_tls_ca = ""
	I0906 19:31:58.909255   44141 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0906 19:31:58.909262   44141 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0906 19:31:58.909269   44141 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0906 19:31:58.909273   44141 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0906 19:31:58.909279   44141 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0906 19:31:58.909286   44141 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0906 19:31:58.909290   44141 command_runner.go:130] > [crio.runtime]
	I0906 19:31:58.909295   44141 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0906 19:31:58.909304   44141 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0906 19:31:58.909308   44141 command_runner.go:130] > # "nofile=1024:2048"
	I0906 19:31:58.909314   44141 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0906 19:31:58.909319   44141 command_runner.go:130] > # default_ulimits = [
	I0906 19:31:58.909323   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909329   44141 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0906 19:31:58.909334   44141 command_runner.go:130] > # no_pivot = false
	I0906 19:31:58.909340   44141 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0906 19:31:58.909346   44141 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0906 19:31:58.909353   44141 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0906 19:31:58.909358   44141 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0906 19:31:58.909363   44141 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0906 19:31:58.909369   44141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0906 19:31:58.909375   44141 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0906 19:31:58.909379   44141 command_runner.go:130] > # Cgroup setting for conmon
	I0906 19:31:58.909386   44141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0906 19:31:58.909392   44141 command_runner.go:130] > conmon_cgroup = "pod"
	I0906 19:31:58.909398   44141 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0906 19:31:58.909409   44141 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0906 19:31:58.909418   44141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0906 19:31:58.909422   44141 command_runner.go:130] > conmon_env = [
	I0906 19:31:58.909430   44141 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0906 19:31:58.909433   44141 command_runner.go:130] > ]
	I0906 19:31:58.909438   44141 command_runner.go:130] > # Additional environment variables to set for all the
	I0906 19:31:58.909445   44141 command_runner.go:130] > # containers. These are overridden if set in the
	I0906 19:31:58.909450   44141 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0906 19:31:58.909455   44141 command_runner.go:130] > # default_env = [
	I0906 19:31:58.909460   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909465   44141 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0906 19:31:58.909472   44141 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0906 19:31:58.909478   44141 command_runner.go:130] > # selinux = false
	I0906 19:31:58.909484   44141 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0906 19:31:58.909490   44141 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0906 19:31:58.909496   44141 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0906 19:31:58.909506   44141 command_runner.go:130] > # seccomp_profile = ""
	I0906 19:31:58.909511   44141 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0906 19:31:58.909518   44141 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0906 19:31:58.909524   44141 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0906 19:31:58.909531   44141 command_runner.go:130] > # which might increase security.
	I0906 19:31:58.909536   44141 command_runner.go:130] > # This option is currently deprecated,
	I0906 19:31:58.909541   44141 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0906 19:31:58.909547   44141 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0906 19:31:58.909553   44141 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0906 19:31:58.909559   44141 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0906 19:31:58.909567   44141 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0906 19:31:58.909573   44141 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0906 19:31:58.909580   44141 command_runner.go:130] > # This option supports live configuration reload.
	I0906 19:31:58.909585   44141 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0906 19:31:58.909590   44141 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0906 19:31:58.909597   44141 command_runner.go:130] > # the cgroup blockio controller.
	I0906 19:31:58.909601   44141 command_runner.go:130] > # blockio_config_file = ""
	I0906 19:31:58.909607   44141 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0906 19:31:58.909612   44141 command_runner.go:130] > # blockio parameters.
	I0906 19:31:58.909616   44141 command_runner.go:130] > # blockio_reload = false
	I0906 19:31:58.909626   44141 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0906 19:31:58.909632   44141 command_runner.go:130] > # irqbalance daemon.
	I0906 19:31:58.909638   44141 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0906 19:31:58.909644   44141 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0906 19:31:58.909652   44141 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0906 19:31:58.909658   44141 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0906 19:31:58.909664   44141 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0906 19:31:58.909670   44141 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0906 19:31:58.909678   44141 command_runner.go:130] > # This option supports live configuration reload.
	I0906 19:31:58.909682   44141 command_runner.go:130] > # rdt_config_file = ""
	I0906 19:31:58.909688   44141 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0906 19:31:58.909694   44141 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0906 19:31:58.909723   44141 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0906 19:31:58.909729   44141 command_runner.go:130] > # separate_pull_cgroup = ""
	I0906 19:31:58.909735   44141 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0906 19:31:58.909743   44141 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0906 19:31:58.909747   44141 command_runner.go:130] > # will be added.
	I0906 19:31:58.909750   44141 command_runner.go:130] > # default_capabilities = [
	I0906 19:31:58.909754   44141 command_runner.go:130] > # 	"CHOWN",
	I0906 19:31:58.909758   44141 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0906 19:31:58.909761   44141 command_runner.go:130] > # 	"FSETID",
	I0906 19:31:58.909765   44141 command_runner.go:130] > # 	"FOWNER",
	I0906 19:31:58.909768   44141 command_runner.go:130] > # 	"SETGID",
	I0906 19:31:58.909772   44141 command_runner.go:130] > # 	"SETUID",
	I0906 19:31:58.909778   44141 command_runner.go:130] > # 	"SETPCAP",
	I0906 19:31:58.909782   44141 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0906 19:31:58.909785   44141 command_runner.go:130] > # 	"KILL",
	I0906 19:31:58.909789   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909798   44141 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0906 19:31:58.909804   44141 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0906 19:31:58.909808   44141 command_runner.go:130] > # add_inheritable_capabilities = false
	I0906 19:31:58.909814   44141 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0906 19:31:58.909822   44141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0906 19:31:58.909826   44141 command_runner.go:130] > default_sysctls = [
	I0906 19:31:58.909830   44141 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0906 19:31:58.909836   44141 command_runner.go:130] > ]
	I0906 19:31:58.909845   44141 command_runner.go:130] > # List of devices on the host that a
	I0906 19:31:58.909853   44141 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0906 19:31:58.909857   44141 command_runner.go:130] > # allowed_devices = [
	I0906 19:31:58.909862   44141 command_runner.go:130] > # 	"/dev/fuse",
	I0906 19:31:58.909871   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909878   44141 command_runner.go:130] > # List of additional devices. specified as
	I0906 19:31:58.909885   44141 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0906 19:31:58.909892   44141 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0906 19:31:58.909898   44141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0906 19:31:58.909903   44141 command_runner.go:130] > # additional_devices = [
	I0906 19:31:58.909908   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909913   44141 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0906 19:31:58.909919   44141 command_runner.go:130] > # cdi_spec_dirs = [
	I0906 19:31:58.909923   44141 command_runner.go:130] > # 	"/etc/cdi",
	I0906 19:31:58.909928   44141 command_runner.go:130] > # 	"/var/run/cdi",
	I0906 19:31:58.909932   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909939   44141 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0906 19:31:58.909947   44141 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0906 19:31:58.909951   44141 command_runner.go:130] > # Defaults to false.
	I0906 19:31:58.909956   44141 command_runner.go:130] > # device_ownership_from_security_context = false
	I0906 19:31:58.909962   44141 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0906 19:31:58.909970   44141 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0906 19:31:58.909974   44141 command_runner.go:130] > # hooks_dir = [
	I0906 19:31:58.909980   44141 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0906 19:31:58.909984   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909992   44141 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0906 19:31:58.909998   44141 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0906 19:31:58.910005   44141 command_runner.go:130] > # its default mounts from the following two files:
	I0906 19:31:58.910008   44141 command_runner.go:130] > #
	I0906 19:31:58.910016   44141 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0906 19:31:58.910023   44141 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0906 19:31:58.910030   44141 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0906 19:31:58.910033   44141 command_runner.go:130] > #
	I0906 19:31:58.910039   44141 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0906 19:31:58.910047   44141 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0906 19:31:58.910053   44141 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0906 19:31:58.910065   44141 command_runner.go:130] > #      only add mounts it finds in this file.
	I0906 19:31:58.910070   44141 command_runner.go:130] > #
	I0906 19:31:58.910074   44141 command_runner.go:130] > # default_mounts_file = ""
	I0906 19:31:58.910079   44141 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0906 19:31:58.910086   44141 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0906 19:31:58.910092   44141 command_runner.go:130] > pids_limit = 1024
	I0906 19:31:58.910098   44141 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0906 19:31:58.910105   44141 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0906 19:31:58.910111   44141 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0906 19:31:58.910120   44141 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0906 19:31:58.910126   44141 command_runner.go:130] > # log_size_max = -1
	I0906 19:31:58.910135   44141 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0906 19:31:58.910141   44141 command_runner.go:130] > # log_to_journald = false
	I0906 19:31:58.910149   44141 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0906 19:31:58.910156   44141 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0906 19:31:58.910161   44141 command_runner.go:130] > # Path to directory for container attach sockets.
	I0906 19:31:58.910168   44141 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0906 19:31:58.910173   44141 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0906 19:31:58.910180   44141 command_runner.go:130] > # bind_mount_prefix = ""
	I0906 19:31:58.910185   44141 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0906 19:31:58.910191   44141 command_runner.go:130] > # read_only = false
	I0906 19:31:58.910196   44141 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0906 19:31:58.910204   44141 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0906 19:31:58.910208   44141 command_runner.go:130] > # live configuration reload.
	I0906 19:31:58.910213   44141 command_runner.go:130] > # log_level = "info"
	I0906 19:31:58.910218   44141 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0906 19:31:58.910226   44141 command_runner.go:130] > # This option supports live configuration reload.
	I0906 19:31:58.910232   44141 command_runner.go:130] > # log_filter = ""
	I0906 19:31:58.910238   44141 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0906 19:31:58.910247   44141 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0906 19:31:58.910251   44141 command_runner.go:130] > # separated by comma.
	I0906 19:31:58.910258   44141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0906 19:31:58.910265   44141 command_runner.go:130] > # uid_mappings = ""
	I0906 19:31:58.910270   44141 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0906 19:31:58.910278   44141 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0906 19:31:58.910282   44141 command_runner.go:130] > # separated by comma.
	I0906 19:31:58.910297   44141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0906 19:31:58.910303   44141 command_runner.go:130] > # gid_mappings = ""
	I0906 19:31:58.910309   44141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0906 19:31:58.910315   44141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0906 19:31:58.910322   44141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0906 19:31:58.910329   44141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0906 19:31:58.910336   44141 command_runner.go:130] > # minimum_mappable_uid = -1
	I0906 19:31:58.910342   44141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0906 19:31:58.910350   44141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0906 19:31:58.910356   44141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0906 19:31:58.910364   44141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0906 19:31:58.910369   44141 command_runner.go:130] > # minimum_mappable_gid = -1
	I0906 19:31:58.910375   44141 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0906 19:31:58.910383   44141 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0906 19:31:58.910388   44141 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0906 19:31:58.910395   44141 command_runner.go:130] > # ctr_stop_timeout = 30
	I0906 19:31:58.910400   44141 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0906 19:31:58.910407   44141 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0906 19:31:58.910412   44141 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0906 19:31:58.910419   44141 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0906 19:31:58.910423   44141 command_runner.go:130] > drop_infra_ctr = false
	I0906 19:31:58.910431   44141 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0906 19:31:58.910437   44141 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0906 19:31:58.910445   44141 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0906 19:31:58.910450   44141 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0906 19:31:58.910457   44141 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0906 19:31:58.910469   44141 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0906 19:31:58.910477   44141 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0906 19:31:58.910482   44141 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0906 19:31:58.910488   44141 command_runner.go:130] > # shared_cpuset = ""
	I0906 19:31:58.910494   44141 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0906 19:31:58.910505   44141 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0906 19:31:58.910511   44141 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0906 19:31:58.910517   44141 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0906 19:31:58.910524   44141 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0906 19:31:58.910529   44141 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0906 19:31:58.910542   44141 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0906 19:31:58.910548   44141 command_runner.go:130] > # enable_criu_support = false
	I0906 19:31:58.910553   44141 command_runner.go:130] > # Enable/disable the generation of the container,
	I0906 19:31:58.910561   44141 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0906 19:31:58.910565   44141 command_runner.go:130] > # enable_pod_events = false
	I0906 19:31:58.910573   44141 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0906 19:31:58.910586   44141 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0906 19:31:58.910590   44141 command_runner.go:130] > # default_runtime = "runc"
	I0906 19:31:58.910595   44141 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0906 19:31:58.910604   44141 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0906 19:31:58.910613   44141 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0906 19:31:58.910620   44141 command_runner.go:130] > # creation as a file is not desired either.
	I0906 19:31:58.910628   44141 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0906 19:31:58.910635   44141 command_runner.go:130] > # the hostname is being managed dynamically.
	I0906 19:31:58.910640   44141 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0906 19:31:58.910645   44141 command_runner.go:130] > # ]
	I0906 19:31:58.910650   44141 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0906 19:31:58.910658   44141 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0906 19:31:58.910664   44141 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0906 19:31:58.910671   44141 command_runner.go:130] > # Each entry in the table should follow the format:
	I0906 19:31:58.910674   44141 command_runner.go:130] > #
	I0906 19:31:58.910682   44141 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0906 19:31:58.910690   44141 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0906 19:31:58.910773   44141 command_runner.go:130] > # runtime_type = "oci"
	I0906 19:31:58.910786   44141 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0906 19:31:58.910790   44141 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0906 19:31:58.910794   44141 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0906 19:31:58.910799   44141 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0906 19:31:58.910802   44141 command_runner.go:130] > # monitor_env = []
	I0906 19:31:58.910807   44141 command_runner.go:130] > # privileged_without_host_devices = false
	I0906 19:31:58.910813   44141 command_runner.go:130] > # allowed_annotations = []
	I0906 19:31:58.910819   44141 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0906 19:31:58.910824   44141 command_runner.go:130] > # Where:
	I0906 19:31:58.910829   44141 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0906 19:31:58.910837   44141 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0906 19:31:58.910847   44141 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0906 19:31:58.910855   44141 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0906 19:31:58.910859   44141 command_runner.go:130] > #   in $PATH.
	I0906 19:31:58.910867   44141 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0906 19:31:58.910872   44141 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0906 19:31:58.910878   44141 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0906 19:31:58.910882   44141 command_runner.go:130] > #   state.
	I0906 19:31:58.910889   44141 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0906 19:31:58.910897   44141 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0906 19:31:58.910902   44141 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0906 19:31:58.910909   44141 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0906 19:31:58.910915   44141 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0906 19:31:58.910923   44141 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0906 19:31:58.910930   44141 command_runner.go:130] > #   The currently recognized values are:
	I0906 19:31:58.910936   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0906 19:31:58.910945   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0906 19:31:58.910951   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0906 19:31:58.910958   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0906 19:31:58.910965   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0906 19:31:58.910977   44141 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0906 19:31:58.910983   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0906 19:31:58.910991   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0906 19:31:58.910997   44141 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0906 19:31:58.911005   44141 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0906 19:31:58.911009   44141 command_runner.go:130] > #   deprecated option "conmon".
	I0906 19:31:58.911019   44141 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0906 19:31:58.911026   44141 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0906 19:31:58.911032   44141 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0906 19:31:58.911039   44141 command_runner.go:130] > #   should be moved to the container's cgroup
	I0906 19:31:58.911045   44141 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0906 19:31:58.911052   44141 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0906 19:31:58.911058   44141 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0906 19:31:58.911065   44141 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0906 19:31:58.911069   44141 command_runner.go:130] > #
	I0906 19:31:58.911074   44141 command_runner.go:130] > # Using the seccomp notifier feature:
	I0906 19:31:58.911077   44141 command_runner.go:130] > #
	I0906 19:31:58.911088   44141 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0906 19:31:58.911096   44141 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0906 19:31:58.911102   44141 command_runner.go:130] > #
	I0906 19:31:58.911108   44141 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0906 19:31:58.911115   44141 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0906 19:31:58.911118   44141 command_runner.go:130] > #
	I0906 19:31:58.911124   44141 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0906 19:31:58.911130   44141 command_runner.go:130] > # feature.
	I0906 19:31:58.911133   44141 command_runner.go:130] > #
	I0906 19:31:58.911138   44141 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0906 19:31:58.911146   44141 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0906 19:31:58.911152   44141 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0906 19:31:58.911160   44141 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0906 19:31:58.911165   44141 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0906 19:31:58.911169   44141 command_runner.go:130] > #
	I0906 19:31:58.911175   44141 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0906 19:31:58.911183   44141 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0906 19:31:58.911186   44141 command_runner.go:130] > #
	I0906 19:31:58.911195   44141 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0906 19:31:58.911203   44141 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0906 19:31:58.911206   44141 command_runner.go:130] > #
	I0906 19:31:58.911214   44141 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0906 19:31:58.911220   44141 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0906 19:31:58.911223   44141 command_runner.go:130] > # limitation.
	I0906 19:31:58.911231   44141 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0906 19:31:58.911238   44141 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0906 19:31:58.911242   44141 command_runner.go:130] > runtime_type = "oci"
	I0906 19:31:58.911248   44141 command_runner.go:130] > runtime_root = "/run/runc"
	I0906 19:31:58.911252   44141 command_runner.go:130] > runtime_config_path = ""
	I0906 19:31:58.911256   44141 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0906 19:31:58.911263   44141 command_runner.go:130] > monitor_cgroup = "pod"
	I0906 19:31:58.911267   44141 command_runner.go:130] > monitor_exec_cgroup = ""
	I0906 19:31:58.911273   44141 command_runner.go:130] > monitor_env = [
	I0906 19:31:58.911279   44141 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0906 19:31:58.911284   44141 command_runner.go:130] > ]
	I0906 19:31:58.911288   44141 command_runner.go:130] > privileged_without_host_devices = false
	I0906 19:31:58.911301   44141 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0906 19:31:58.911308   44141 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0906 19:31:58.911314   44141 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0906 19:31:58.911323   44141 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0906 19:31:58.911332   44141 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0906 19:31:58.911338   44141 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0906 19:31:58.911347   44141 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0906 19:31:58.911357   44141 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0906 19:31:58.911365   44141 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0906 19:31:58.911372   44141 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0906 19:31:58.911375   44141 command_runner.go:130] > # Example:
	I0906 19:31:58.911379   44141 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0906 19:31:58.911384   44141 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0906 19:31:58.911388   44141 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0906 19:31:58.911393   44141 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0906 19:31:58.911396   44141 command_runner.go:130] > # cpuset = 0
	I0906 19:31:58.911400   44141 command_runner.go:130] > # cpushares = "0-1"
	I0906 19:31:58.911403   44141 command_runner.go:130] > # Where:
	I0906 19:31:58.911407   44141 command_runner.go:130] > # The workload name is workload-type.
	I0906 19:31:58.911414   44141 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0906 19:31:58.911418   44141 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0906 19:31:58.911424   44141 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0906 19:31:58.911431   44141 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0906 19:31:58.911436   44141 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0906 19:31:58.911440   44141 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0906 19:31:58.911446   44141 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0906 19:31:58.911450   44141 command_runner.go:130] > # Default value is set to true
	I0906 19:31:58.911454   44141 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0906 19:31:58.911459   44141 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0906 19:31:58.911463   44141 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0906 19:31:58.911467   44141 command_runner.go:130] > # Default value is set to 'false'
	I0906 19:31:58.911471   44141 command_runner.go:130] > # disable_hostport_mapping = false
	I0906 19:31:58.911477   44141 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0906 19:31:58.911480   44141 command_runner.go:130] > #
	I0906 19:31:58.911485   44141 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0906 19:31:58.911490   44141 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0906 19:31:58.911504   44141 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0906 19:31:58.911510   44141 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0906 19:31:58.911515   44141 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0906 19:31:58.911518   44141 command_runner.go:130] > [crio.image]
	I0906 19:31:58.911528   44141 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0906 19:31:58.911532   44141 command_runner.go:130] > # default_transport = "docker://"
	I0906 19:31:58.911537   44141 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0906 19:31:58.911543   44141 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0906 19:31:58.911547   44141 command_runner.go:130] > # global_auth_file = ""
	I0906 19:31:58.911553   44141 command_runner.go:130] > # The image used to instantiate infra containers.
	I0906 19:31:58.911558   44141 command_runner.go:130] > # This option supports live configuration reload.
	I0906 19:31:58.911565   44141 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0906 19:31:58.911571   44141 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0906 19:31:58.911579   44141 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0906 19:31:58.911584   44141 command_runner.go:130] > # This option supports live configuration reload.
	I0906 19:31:58.911590   44141 command_runner.go:130] > # pause_image_auth_file = ""
	I0906 19:31:58.911596   44141 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0906 19:31:58.911604   44141 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0906 19:31:58.911611   44141 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0906 19:31:58.911617   44141 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0906 19:31:58.911622   44141 command_runner.go:130] > # pause_command = "/pause"
	I0906 19:31:58.911628   44141 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0906 19:31:58.911635   44141 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0906 19:31:58.911641   44141 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0906 19:31:58.911650   44141 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0906 19:31:58.911658   44141 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0906 19:31:58.911664   44141 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0906 19:31:58.911670   44141 command_runner.go:130] > # pinned_images = [
	I0906 19:31:58.911674   44141 command_runner.go:130] > # ]
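	Illustrative sketch (not part of this config): exact and trailing-glob patterns could be pinned via a drop-in; the filename and image list are hypothetical.
	  cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/10-pinned-images.conf
	  [crio.image]
	  pinned_images = [
	    "registry.k8s.io/pause:3.10",
	    "registry.k8s.io/kube-apiserver*",
	  ]
	  EOF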
	I0906 19:31:58.911682   44141 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0906 19:31:58.911688   44141 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0906 19:31:58.911696   44141 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0906 19:31:58.911702   44141 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0906 19:31:58.911709   44141 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0906 19:31:58.911713   44141 command_runner.go:130] > # signature_policy = ""
	I0906 19:31:58.911721   44141 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0906 19:31:58.911737   44141 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0906 19:31:58.911745   44141 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0906 19:31:58.911751   44141 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0906 19:31:58.911758   44141 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0906 19:31:58.911766   44141 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0906 19:31:58.911774   44141 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0906 19:31:58.911780   44141 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0906 19:31:58.911786   44141 command_runner.go:130] > # changing them here.
	I0906 19:31:58.911790   44141 command_runner.go:130] > # insecure_registries = [
	I0906 19:31:58.911795   44141 command_runner.go:130] > # ]
	I0906 19:31:58.911801   44141 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0906 19:31:58.911808   44141 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0906 19:31:58.911812   44141 command_runner.go:130] > # image_volumes = "mkdir"
	I0906 19:31:58.911819   44141 command_runner.go:130] > # Temporary directory to use for storing big files
	I0906 19:31:58.911824   44141 command_runner.go:130] > # big_files_temporary_dir = ""
	I0906 19:31:58.911832   44141 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0906 19:31:58.911836   44141 command_runner.go:130] > # CNI plugins.
	I0906 19:31:58.911839   44141 command_runner.go:130] > [crio.network]
	I0906 19:31:58.911844   44141 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0906 19:31:58.911852   44141 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0906 19:31:58.911856   44141 command_runner.go:130] > # cni_default_network = ""
	I0906 19:31:58.911863   44141 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0906 19:31:58.911868   44141 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0906 19:31:58.911876   44141 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0906 19:31:58.911880   44141 command_runner.go:130] > # plugin_dirs = [
	I0906 19:31:58.911886   44141 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0906 19:31:58.911889   44141 command_runner.go:130] > # ]
	I0906 19:31:58.911897   44141 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0906 19:31:58.911901   44141 command_runner.go:130] > [crio.metrics]
	I0906 19:31:58.911907   44141 command_runner.go:130] > # Globally enable or disable metrics support.
	I0906 19:31:58.911911   44141 command_runner.go:130] > enable_metrics = true
	I0906 19:31:58.911916   44141 command_runner.go:130] > # Specify enabled metrics collectors.
	I0906 19:31:58.911922   44141 command_runner.go:130] > # Per default all metrics are enabled.
	I0906 19:31:58.911928   44141 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0906 19:31:58.911936   44141 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0906 19:31:58.911942   44141 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0906 19:31:58.911952   44141 command_runner.go:130] > # metrics_collectors = [
	I0906 19:31:58.911958   44141 command_runner.go:130] > # 	"operations",
	I0906 19:31:58.911963   44141 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0906 19:31:58.911969   44141 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0906 19:31:58.911973   44141 command_runner.go:130] > # 	"operations_errors",
	I0906 19:31:58.911980   44141 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0906 19:31:58.911984   44141 command_runner.go:130] > # 	"image_pulls_by_name",
	I0906 19:31:58.911990   44141 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0906 19:31:58.911994   44141 command_runner.go:130] > # 	"image_pulls_failures",
	I0906 19:31:58.912001   44141 command_runner.go:130] > # 	"image_pulls_successes",
	I0906 19:31:58.912005   44141 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0906 19:31:58.912011   44141 command_runner.go:130] > # 	"image_layer_reuse",
	I0906 19:31:58.912015   44141 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0906 19:31:58.912019   44141 command_runner.go:130] > # 	"containers_oom_total",
	I0906 19:31:58.912023   44141 command_runner.go:130] > # 	"containers_oom",
	I0906 19:31:58.912027   44141 command_runner.go:130] > # 	"processes_defunct",
	I0906 19:31:58.912033   44141 command_runner.go:130] > # 	"operations_total",
	I0906 19:31:58.912037   44141 command_runner.go:130] > # 	"operations_latency_seconds",
	I0906 19:31:58.912044   44141 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0906 19:31:58.912048   44141 command_runner.go:130] > # 	"operations_errors_total",
	I0906 19:31:58.912054   44141 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0906 19:31:58.912058   44141 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0906 19:31:58.912065   44141 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0906 19:31:58.912069   44141 command_runner.go:130] > # 	"image_pulls_success_total",
	I0906 19:31:58.912076   44141 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0906 19:31:58.912080   44141 command_runner.go:130] > # 	"containers_oom_count_total",
	I0906 19:31:58.912091   44141 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0906 19:31:58.912098   44141 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0906 19:31:58.912101   44141 command_runner.go:130] > # ]
	I0906 19:31:58.912106   44141 command_runner.go:130] > # The port on which the metrics server will listen.
	I0906 19:31:58.912112   44141 command_runner.go:130] > # metrics_port = 9090
	I0906 19:31:58.912116   44141 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0906 19:31:58.912122   44141 command_runner.go:130] > # metrics_socket = ""
	I0906 19:31:58.912127   44141 command_runner.go:130] > # The certificate for the secure metrics server.
	I0906 19:31:58.912134   44141 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0906 19:31:58.912140   44141 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0906 19:31:58.912151   44141 command_runner.go:130] > # certificate on any modification event.
	I0906 19:31:58.912157   44141 command_runner.go:130] > # metrics_cert = ""
	I0906 19:31:58.912163   44141 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0906 19:31:58.912169   44141 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0906 19:31:58.912173   44141 command_runner.go:130] > # metrics_key = ""
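	Since enable_metrics is true and metrics_port is left at its default of 9090, the endpoint could be probed from the node like this (illustrative sketch, assuming curl is available):
	  curl -s http://127.0.0.1:9090/metrics | head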
	I0906 19:31:58.912181   44141 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0906 19:31:58.912184   44141 command_runner.go:130] > [crio.tracing]
	I0906 19:31:58.912190   44141 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0906 19:31:58.912195   44141 command_runner.go:130] > # enable_tracing = false
	I0906 19:31:58.912201   44141 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0906 19:31:58.912207   44141 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0906 19:31:58.912214   44141 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0906 19:31:58.912221   44141 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0906 19:31:58.912225   44141 command_runner.go:130] > # CRI-O NRI configuration.
	I0906 19:31:58.912230   44141 command_runner.go:130] > [crio.nri]
	I0906 19:31:58.912234   44141 command_runner.go:130] > # Globally enable or disable NRI.
	I0906 19:31:58.912238   44141 command_runner.go:130] > # enable_nri = false
	I0906 19:31:58.912243   44141 command_runner.go:130] > # NRI socket to listen on.
	I0906 19:31:58.912249   44141 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0906 19:31:58.912254   44141 command_runner.go:130] > # NRI plugin directory to use.
	I0906 19:31:58.912261   44141 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0906 19:31:58.912266   44141 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0906 19:31:58.912273   44141 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0906 19:31:58.912278   44141 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0906 19:31:58.912284   44141 command_runner.go:130] > # nri_disable_connections = false
	I0906 19:31:58.912290   44141 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0906 19:31:58.912296   44141 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0906 19:31:58.912302   44141 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0906 19:31:58.912308   44141 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0906 19:31:58.912314   44141 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0906 19:31:58.912320   44141 command_runner.go:130] > [crio.stats]
	I0906 19:31:58.912325   44141 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0906 19:31:58.912333   44141 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0906 19:31:58.912337   44141 command_runner.go:130] > # stats_collection_period = 0
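	A commented dump of this form can be regenerated on the node itself (illustrative sketch, assuming the crio binary is on PATH):
	  sudo crio config | less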
	I0906 19:31:58.912484   44141 cni.go:84] Creating CNI manager for ""
	I0906 19:31:58.912503   44141 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 19:31:58.912518   44141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:31:58.912540   44141 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-002640 NodeName:multinode-002640 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:31:58.912662   44141 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-002640"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:31:58.912717   44141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 19:31:58.924570   44141 command_runner.go:130] > kubeadm
	I0906 19:31:58.924592   44141 command_runner.go:130] > kubectl
	I0906 19:31:58.924598   44141 command_runner.go:130] > kubelet
	I0906 19:31:58.924651   44141 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:31:58.924696   44141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 19:31:58.935587   44141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0906 19:31:58.955494   44141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:31:58.973450   44141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
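	Illustrative sketch (not executed in this run): the generated file could be sanity-checked with the kubeadm binary located above before it is applied; the `config validate` subcommand is assumed to be present in this kubeadm release.
	  sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new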
	I0906 19:31:58.991811   44141 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0906 19:31:58.995626   44141 command_runner.go:130] > 192.168.39.11	control-plane.minikube.internal
	I0906 19:31:58.995740   44141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:31:59.142783   44141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:31:59.156973   44141 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640 for IP: 192.168.39.11
	I0906 19:31:59.156997   44141 certs.go:194] generating shared ca certs ...
	I0906 19:31:59.157011   44141 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:31:59.157165   44141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:31:59.157208   44141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:31:59.157218   44141 certs.go:256] generating profile certs ...
	I0906 19:31:59.157286   44141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/client.key
	I0906 19:31:59.157340   44141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/apiserver.key.7a18bd90
	I0906 19:31:59.157375   44141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/proxy-client.key
	I0906 19:31:59.157383   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 19:31:59.157394   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 19:31:59.157404   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 19:31:59.157413   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 19:31:59.157423   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 19:31:59.157435   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 19:31:59.157448   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 19:31:59.157459   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 19:31:59.157505   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:31:59.157532   44141 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:31:59.157541   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:31:59.157572   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:31:59.157594   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:31:59.157621   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:31:59.157662   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:31:59.157687   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:31:59.157701   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 19:31:59.157714   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 19:31:59.158260   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:31:59.184010   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:31:59.208214   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:31:59.231717   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:31:59.255891   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 19:31:59.279017   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 19:31:59.302259   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:31:59.325499   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 19:31:59.350079   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:31:59.373448   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:31:59.396812   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:31:59.420141   44141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:31:59.436981   44141 ssh_runner.go:195] Run: openssl version
	I0906 19:31:59.442695   44141 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0906 19:31:59.442838   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:31:59.453510   44141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:31:59.457987   44141 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:31:59.458044   44141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:31:59.458094   44141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:31:59.463621   44141 command_runner.go:130] > 51391683
	I0906 19:31:59.463683   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 19:31:59.472912   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:31:59.483578   44141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:31:59.487907   44141 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:31:59.487950   44141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:31:59.487992   44141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:31:59.493550   44141 command_runner.go:130] > 3ec20f2e
	I0906 19:31:59.493601   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:31:59.502553   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:31:59.512956   44141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:31:59.517456   44141 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:31:59.517474   44141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:31:59.517553   44141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:31:59.523022   44141 command_runner.go:130] > b5213941
	I0906 19:31:59.523157   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
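	The hash-named symlinks created above are what let OpenSSL look these CAs up by subject hash; a quick check of the result (illustrative sketch):
	  openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem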
	I0906 19:31:59.532459   44141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:31:59.537047   44141 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:31:59.537063   44141 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0906 19:31:59.537069   44141 command_runner.go:130] > Device: 253,1	Inode: 5244438     Links: 1
	I0906 19:31:59.537079   44141 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 19:31:59.537087   44141 command_runner.go:130] > Access: 2024-09-06 19:25:08.799732055 +0000
	I0906 19:31:59.537098   44141 command_runner.go:130] > Modify: 2024-09-06 19:25:08.799732055 +0000
	I0906 19:31:59.537108   44141 command_runner.go:130] > Change: 2024-09-06 19:25:08.799732055 +0000
	I0906 19:31:59.537115   44141 command_runner.go:130] >  Birth: 2024-09-06 19:25:08.799732055 +0000
	I0906 19:31:59.537305   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 19:31:59.542760   44141 command_runner.go:130] > Certificate will not expire
	I0906 19:31:59.542828   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 19:31:59.548242   44141 command_runner.go:130] > Certificate will not expire
	I0906 19:31:59.548301   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 19:31:59.553827   44141 command_runner.go:130] > Certificate will not expire
	I0906 19:31:59.553889   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 19:31:59.559140   44141 command_runner.go:130] > Certificate will not expire
	I0906 19:31:59.559195   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 19:31:59.564477   44141 command_runner.go:130] > Certificate will not expire
	I0906 19:31:59.564608   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 19:31:59.570063   44141 command_runner.go:130] > Certificate will not expire
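	-checkend 86400 only reports whether a certificate expires within the next 24 hours; to print the actual expiry date one could run (illustrative sketch):
	  sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt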
	I0906 19:31:59.570131   44141 kubeadm.go:392] StartCluster: {Name:multinode-002640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-002640 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:fal
se kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:31:59.570282   44141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:31:59.570344   44141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:31:59.607628   44141 command_runner.go:130] > fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306
	I0906 19:31:59.607656   44141 command_runner.go:130] > e1baf3591f8658a01e3dceba30d428aef4dd7ca237973478fc9ff37669ec4bf2
	I0906 19:31:59.607665   44141 command_runner.go:130] > 9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb
	I0906 19:31:59.607675   44141 command_runner.go:130] > 7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809
	I0906 19:31:59.607684   44141 command_runner.go:130] > 826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34
	I0906 19:31:59.607692   44141 command_runner.go:130] > 9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef
	I0906 19:31:59.607698   44141 command_runner.go:130] > 3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a
	I0906 19:31:59.607707   44141 command_runner.go:130] > bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7
	I0906 19:31:59.607731   44141 cri.go:89] found id: "fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306"
	I0906 19:31:59.607740   44141 cri.go:89] found id: "e1baf3591f8658a01e3dceba30d428aef4dd7ca237973478fc9ff37669ec4bf2"
	I0906 19:31:59.607745   44141 cri.go:89] found id: "9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb"
	I0906 19:31:59.607751   44141 cri.go:89] found id: "7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809"
	I0906 19:31:59.607756   44141 cri.go:89] found id: "826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34"
	I0906 19:31:59.607762   44141 cri.go:89] found id: "9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef"
	I0906 19:31:59.607766   44141 cri.go:89] found id: "3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a"
	I0906 19:31:59.607771   44141 cri.go:89] found id: "bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7"
	I0906 19:31:59.607775   44141 cri.go:89] found id: ""
	I0906 19:31:59.607824   44141 ssh_runner.go:195] Run: sudo runc list -f json
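	The crictl listing above returns one container ID per line for everything labeled with the kube-system namespace, and the "found id" entries are simply that output split line by line. A rough sketch of the same listing, assuming only that crictl is on PATH and can reach the CRI socket (this is illustrative, not minikube's cri.go):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Mirrors: sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
	            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	        if err != nil {
	            fmt.Println("crictl failed:", err)
	            return
	        }
	        // --quiet prints one container ID per line.
	        for _, id := range strings.Fields(string(out)) {
	            fmt.Println("found id:", id)
	        }
	    }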
	
	
	==> CRI-O <==
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.299903275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a05785d-e3d8-4e09-b445-15daf8a3efce name=/runtime.v1.RuntimeService/Version
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.301176870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fea1e27-8c1d-4a76-b08b-f5b0354adde8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.301573884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651221301552008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fea1e27-8c1d-4a76-b08b-f5b0354adde8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.302252164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f02ed3d-d713-4d05-9c23-379ed0bb13db name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.302325312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f02ed3d-d713-4d05-9c23-379ed0bb13db name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.302734845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a724a02d1d40e5ce9a6c144791452eac8032625ce77e796e6b068cc6d4fee007,PodSandboxId:cd7c8d5d20cb949d0a2098c159e425b1465dc35176c1a5d93aa1339c250f4c73,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725651160604751317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa57d09d65ee9af85705b64493b569089b2f9110c13fa9e3d7fb316014a5b683,PodSandboxId:03b3e59c782e6b60b4f8e91f6455fb0e2941b2609f6fd6788f8ea5bd8918e739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725651127221258601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60903012261f8b49797fd5005d85b8b6897f9bcc1e07852670509faa428de3d8,PodSandboxId:f010aa4d0b2c7274691eaa560cf9055940d6a875a1d5fdb5ff88e77d7844b728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725651127046063004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff3fc7226613fe452a21e19a6048bd6e7dfbc83e99311374336ad933046d709,PodSandboxId:b3e165b4e546623ec3427e09df49f0cd10951946f101d740a7bf921259c3d47a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725651127080014813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\
"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01df9f85f91f3c09c29cef210087503759c1831fe0be1940125ebb223e539050,PodSandboxId:5a8b4eb90966626a398ca99fd1152cca34f9e41d801268c263d0d6c1921e5293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651126927621833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2734c24e5585361c206722b532e9e32d1f8be04b43de76b540f6613e035b51b,PodSandboxId:5900e25fc73ed360f1a2251bf6e848b988a8ce0fdf9531282ccde92236bdbe73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725651122138720708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b26eb0c19801bdac34a1293c1d478898e046a00ddd5d65dd21c502fd6a95206,PodSandboxId:47a10633a7f270eabd5618c815faa446a3b32087337e8fe36269ec7e0e8b8860,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725651122141356596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b6109aa0b6cd8,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f250ca60b2d272b4ddb4d71770f5ee8e02754d75184750851522f584f6371c1d,PodSandboxId:e0782b60f43d8c904a7f59d2495460ee84dd9e07a2517cc2a7b61868c16b1d9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725651122076797324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c2f51ba024b29a4ac10030f98d954eca576d5bd675ee511a3233fc18006359,PodSandboxId:7979ed526635e3afb43b4cb6cc1c63bf3ac1c5b87214f3dde33816e5c92b31ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725651122080148992,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599f6b25cf823d6044aa665b8e9bc2c6e4faba8efe29ec35d087f566b823b714,PodSandboxId:3a30bbfc3e3618482edc9f0a89bfbd18207ef721b2c91b55ddb3ed5574527e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725650795294702195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306,PodSandboxId:b16fb217c884cf5a9d162f808828c1891087eed4d6f6e4e70fee289b8ae30cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725650738982961223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1baf3591f8658a01e3dceba30d428aef4dd7ca237973478fc9ff37669ec4bf2,PodSandboxId:32dbc56fdd7f91261e21c840963d710e5dd3e0052be2b608a6ca7059bbd4eb1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725650737991455735,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb,PodSandboxId:a8c7e71e0bc17c9a9ca078b47a637b04fd21e07d445ed12ce103f8b84d71b55c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725650726219277305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809,PodSandboxId:fa6584d7fed346b8aac6705d4a51a92ff51ecb0742f8c84b41af16a5cbc9c0b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725650724248476769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34,PodSandboxId:6b861cae653621a5aedc6992414b7a1dd0b05af1d1c743cbc802cf5819174d6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725650713191454247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef,PodSandboxId:b923cc24dbfcfec6bbe3d71b1547e36b79b87194d8c7358f5b0e858f951d664a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725650713150043703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b
6109aa0b6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a,PodSandboxId:f2615377e2f23859cec623c64c4fa55073633dd556af9221cd1fccc9b3a9ebf1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725650713124164633,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},
Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7,PodSandboxId:8d2a2c6dc681a6692ee267d420e70da248b9630bd9819e00b9e280468355a68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725650713079407097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f02ed3d-d713-4d05-9c23-379ed0bb13db name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.348972048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ebd6841-dbde-4646-866b-ff6933ccb339 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.349042768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ebd6841-dbde-4646-866b-ff6933ccb339 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.356828468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6b7b39e-7a24-4c14-b8aa-a9c0da33b179 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.357239620Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651221357212522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6b7b39e-7a24-4c14-b8aa-a9c0da33b179 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.357891842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcba8793-1efc-46a0-a71f-b2dd0538b9c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.358029443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcba8793-1efc-46a0-a71f-b2dd0538b9c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.358417017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a724a02d1d40e5ce9a6c144791452eac8032625ce77e796e6b068cc6d4fee007,PodSandboxId:cd7c8d5d20cb949d0a2098c159e425b1465dc35176c1a5d93aa1339c250f4c73,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725651160604751317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa57d09d65ee9af85705b64493b569089b2f9110c13fa9e3d7fb316014a5b683,PodSandboxId:03b3e59c782e6b60b4f8e91f6455fb0e2941b2609f6fd6788f8ea5bd8918e739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725651127221258601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60903012261f8b49797fd5005d85b8b6897f9bcc1e07852670509faa428de3d8,PodSandboxId:f010aa4d0b2c7274691eaa560cf9055940d6a875a1d5fdb5ff88e77d7844b728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725651127046063004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff3fc7226613fe452a21e19a6048bd6e7dfbc83e99311374336ad933046d709,PodSandboxId:b3e165b4e546623ec3427e09df49f0cd10951946f101d740a7bf921259c3d47a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725651127080014813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\
"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01df9f85f91f3c09c29cef210087503759c1831fe0be1940125ebb223e539050,PodSandboxId:5a8b4eb90966626a398ca99fd1152cca34f9e41d801268c263d0d6c1921e5293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651126927621833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2734c24e5585361c206722b532e9e32d1f8be04b43de76b540f6613e035b51b,PodSandboxId:5900e25fc73ed360f1a2251bf6e848b988a8ce0fdf9531282ccde92236bdbe73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725651122138720708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b26eb0c19801bdac34a1293c1d478898e046a00ddd5d65dd21c502fd6a95206,PodSandboxId:47a10633a7f270eabd5618c815faa446a3b32087337e8fe36269ec7e0e8b8860,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725651122141356596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b6109aa0b6cd8,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f250ca60b2d272b4ddb4d71770f5ee8e02754d75184750851522f584f6371c1d,PodSandboxId:e0782b60f43d8c904a7f59d2495460ee84dd9e07a2517cc2a7b61868c16b1d9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725651122076797324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c2f51ba024b29a4ac10030f98d954eca576d5bd675ee511a3233fc18006359,PodSandboxId:7979ed526635e3afb43b4cb6cc1c63bf3ac1c5b87214f3dde33816e5c92b31ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725651122080148992,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599f6b25cf823d6044aa665b8e9bc2c6e4faba8efe29ec35d087f566b823b714,PodSandboxId:3a30bbfc3e3618482edc9f0a89bfbd18207ef721b2c91b55ddb3ed5574527e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725650795294702195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306,PodSandboxId:b16fb217c884cf5a9d162f808828c1891087eed4d6f6e4e70fee289b8ae30cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725650738982961223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1baf3591f8658a01e3dceba30d428aef4dd7ca237973478fc9ff37669ec4bf2,PodSandboxId:32dbc56fdd7f91261e21c840963d710e5dd3e0052be2b608a6ca7059bbd4eb1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725650737991455735,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb,PodSandboxId:a8c7e71e0bc17c9a9ca078b47a637b04fd21e07d445ed12ce103f8b84d71b55c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725650726219277305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809,PodSandboxId:fa6584d7fed346b8aac6705d4a51a92ff51ecb0742f8c84b41af16a5cbc9c0b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725650724248476769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34,PodSandboxId:6b861cae653621a5aedc6992414b7a1dd0b05af1d1c743cbc802cf5819174d6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725650713191454247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef,PodSandboxId:b923cc24dbfcfec6bbe3d71b1547e36b79b87194d8c7358f5b0e858f951d664a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725650713150043703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b
6109aa0b6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a,PodSandboxId:f2615377e2f23859cec623c64c4fa55073633dd556af9221cd1fccc9b3a9ebf1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725650713124164633,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},
Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7,PodSandboxId:8d2a2c6dc681a6692ee267d420e70da248b9630bd9819e00b9e280468355a68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725650713079407097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcba8793-1efc-46a0-a71f-b2dd0538b9c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.400522937Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0f24f62-80bb-47d1-9407-e056e9a58d13 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.400614032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0f24f62-80bb-47d1-9407-e056e9a58d13 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.401412081Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=230645d9-ee18-43e0-b907-d61e3fa3670e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.401898923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651221401876437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=230645d9-ee18-43e0-b907-d61e3fa3670e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.402342120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a680e627-73b3-4874-8065-74b2b5e787d6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.402413823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a680e627-73b3-4874-8065-74b2b5e787d6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.402835857Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a724a02d1d40e5ce9a6c144791452eac8032625ce77e796e6b068cc6d4fee007,PodSandboxId:cd7c8d5d20cb949d0a2098c159e425b1465dc35176c1a5d93aa1339c250f4c73,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725651160604751317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa57d09d65ee9af85705b64493b569089b2f9110c13fa9e3d7fb316014a5b683,PodSandboxId:03b3e59c782e6b60b4f8e91f6455fb0e2941b2609f6fd6788f8ea5bd8918e739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725651127221258601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60903012261f8b49797fd5005d85b8b6897f9bcc1e07852670509faa428de3d8,PodSandboxId:f010aa4d0b2c7274691eaa560cf9055940d6a875a1d5fdb5ff88e77d7844b728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725651127046063004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff3fc7226613fe452a21e19a6048bd6e7dfbc83e99311374336ad933046d709,PodSandboxId:b3e165b4e546623ec3427e09df49f0cd10951946f101d740a7bf921259c3d47a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725651127080014813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\
"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01df9f85f91f3c09c29cef210087503759c1831fe0be1940125ebb223e539050,PodSandboxId:5a8b4eb90966626a398ca99fd1152cca34f9e41d801268c263d0d6c1921e5293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651126927621833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2734c24e5585361c206722b532e9e32d1f8be04b43de76b540f6613e035b51b,PodSandboxId:5900e25fc73ed360f1a2251bf6e848b988a8ce0fdf9531282ccde92236bdbe73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725651122138720708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b26eb0c19801bdac34a1293c1d478898e046a00ddd5d65dd21c502fd6a95206,PodSandboxId:47a10633a7f270eabd5618c815faa446a3b32087337e8fe36269ec7e0e8b8860,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725651122141356596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b6109aa0b6cd8,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f250ca60b2d272b4ddb4d71770f5ee8e02754d75184750851522f584f6371c1d,PodSandboxId:e0782b60f43d8c904a7f59d2495460ee84dd9e07a2517cc2a7b61868c16b1d9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725651122076797324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c2f51ba024b29a4ac10030f98d954eca576d5bd675ee511a3233fc18006359,PodSandboxId:7979ed526635e3afb43b4cb6cc1c63bf3ac1c5b87214f3dde33816e5c92b31ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725651122080148992,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599f6b25cf823d6044aa665b8e9bc2c6e4faba8efe29ec35d087f566b823b714,PodSandboxId:3a30bbfc3e3618482edc9f0a89bfbd18207ef721b2c91b55ddb3ed5574527e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725650795294702195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306,PodSandboxId:b16fb217c884cf5a9d162f808828c1891087eed4d6f6e4e70fee289b8ae30cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725650738982961223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1baf3591f8658a01e3dceba30d428aef4dd7ca237973478fc9ff37669ec4bf2,PodSandboxId:32dbc56fdd7f91261e21c840963d710e5dd3e0052be2b608a6ca7059bbd4eb1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725650737991455735,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb,PodSandboxId:a8c7e71e0bc17c9a9ca078b47a637b04fd21e07d445ed12ce103f8b84d71b55c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725650726219277305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809,PodSandboxId:fa6584d7fed346b8aac6705d4a51a92ff51ecb0742f8c84b41af16a5cbc9c0b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725650724248476769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34,PodSandboxId:6b861cae653621a5aedc6992414b7a1dd0b05af1d1c743cbc802cf5819174d6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725650713191454247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef,PodSandboxId:b923cc24dbfcfec6bbe3d71b1547e36b79b87194d8c7358f5b0e858f951d664a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725650713150043703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b
6109aa0b6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a,PodSandboxId:f2615377e2f23859cec623c64c4fa55073633dd556af9221cd1fccc9b3a9ebf1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725650713124164633,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},
Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7,PodSandboxId:8d2a2c6dc681a6692ee267d420e70da248b9630bd9819e00b9e280468355a68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725650713079407097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a680e627-73b3-4874-8065-74b2b5e787d6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.421908396Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1b8a04d-3bd7-434f-a014-13f06c987a52 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.422183730Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cd7c8d5d20cb949d0a2098c159e425b1465dc35176c1a5d93aa1339c250f4c73,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-lmdp2,Uid:6c0be4d0-1c53-4144-ad7c-d806c021b7a9,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725651160482555756,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T19:32:06.344398604Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b3e165b4e546623ec3427e09df49f0cd10951946f101d740a7bf921259c3d47a,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-r9zn7,Uid:8fe242e6-a5a0-4da8-8772-bf1394fdc942,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1725651126789930729,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T19:32:06.344399668Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f010aa4d0b2c7274691eaa560cf9055940d6a875a1d5fdb5ff88e77d7844b728,Metadata:&PodSandboxMetadata{Name:kube-proxy-k2p8s,Uid:cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725651126718752280,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{
kubernetes.io/config.seen: 2024-09-06T19:32:06.344404078Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a8b4eb90966626a398ca99fd1152cca34f9e41d801268c263d0d6c1921e5293,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4bb70bf8-eeca-4508-a590-4e2c5aa927bf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725651126693530335,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"
/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-06T19:32:06.344396028Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:03b3e59c782e6b60b4f8e91f6455fb0e2941b2609f6fd6788f8ea5bd8918e739,Metadata:&PodSandboxMetadata{Name:kindnet-6jxr2,Uid:96166804-e885-4f84-aecd-a0b3bda8337f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725651126686780794,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T19:32:06.344392110Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5900e25fc73ed360f1a2251bf6e848b988a8ce0fdf9531282ccde92236bdbe73,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-002640,Uid:fd16a4366ababa094dd9841805105e1f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725651121882819590,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fd16a4366ababa094dd9841805105e1f,kubernetes.io/config.seen: 2024-09-06T19:32:01.346506116Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47a10633a7f270eabd5618c815faa446a3b32087337e8fe36269ec7e0e8b8860,Metadata:&PodSandboxMetadata{Name:kube-controller-mana
ger-multinode-002640,Uid:4b4553ea27491810d24b6109aa0b6cd8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725651121869473965,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b6109aa0b6cd8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4b4553ea27491810d24b6109aa0b6cd8,kubernetes.io/config.seen: 2024-09-06T19:32:01.346502415Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e0782b60f43d8c904a7f59d2495460ee84dd9e07a2517cc2a7b61868c16b1d9c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-002640,Uid:2c279ebb7122d06252ca9a31d4f8602a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725651121867448069,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-0
02640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.11:8443,kubernetes.io/config.hash: 2c279ebb7122d06252ca9a31d4f8602a,kubernetes.io/config.seen: 2024-09-06T19:32:01.346508421Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7979ed526635e3afb43b4cb6cc1c63bf3ac1c5b87214f3dde33816e5c92b31ee,Metadata:&PodSandboxMetadata{Name:etcd-multinode-002640,Uid:270747f739d4ddf280a4a7ba1a5a608f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725651121856471099,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.11:2379,kubernete
s.io/config.hash: 270747f739d4ddf280a4a7ba1a5a608f,kubernetes.io/config.seen: 2024-09-06T19:32:01.346507264Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b1b8a04d-3bd7-434f-a014-13f06c987a52 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.422833936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9658954e-bc36-43ad-a132-d7245b09b496 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.422910723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9658954e-bc36-43ad-a132-d7245b09b496 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:33:41 multinode-002640 crio[2738]: time="2024-09-06 19:33:41.423111696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a724a02d1d40e5ce9a6c144791452eac8032625ce77e796e6b068cc6d4fee007,PodSandboxId:cd7c8d5d20cb949d0a2098c159e425b1465dc35176c1a5d93aa1339c250f4c73,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725651160604751317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa57d09d65ee9af85705b64493b569089b2f9110c13fa9e3d7fb316014a5b683,PodSandboxId:03b3e59c782e6b60b4f8e91f6455fb0e2941b2609f6fd6788f8ea5bd8918e739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725651127221258601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60903012261f8b49797fd5005d85b8b6897f9bcc1e07852670509faa428de3d8,PodSandboxId:f010aa4d0b2c7274691eaa560cf9055940d6a875a1d5fdb5ff88e77d7844b728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725651127046063004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff3fc7226613fe452a21e19a6048bd6e7dfbc83e99311374336ad933046d709,PodSandboxId:b3e165b4e546623ec3427e09df49f0cd10951946f101d740a7bf921259c3d47a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725651127080014813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\
"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01df9f85f91f3c09c29cef210087503759c1831fe0be1940125ebb223e539050,PodSandboxId:5a8b4eb90966626a398ca99fd1152cca34f9e41d801268c263d0d6c1921e5293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651126927621833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2734c24e5585361c206722b532e9e32d1f8be04b43de76b540f6613e035b51b,PodSandboxId:5900e25fc73ed360f1a2251bf6e848b988a8ce0fdf9531282ccde92236bdbe73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725651122138720708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b26eb0c19801bdac34a1293c1d478898e046a00ddd5d65dd21c502fd6a95206,PodSandboxId:47a10633a7f270eabd5618c815faa446a3b32087337e8fe36269ec7e0e8b8860,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725651122141356596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b6109aa0b6cd8,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f250ca60b2d272b4ddb4d71770f5ee8e02754d75184750851522f584f6371c1d,PodSandboxId:e0782b60f43d8c904a7f59d2495460ee84dd9e07a2517cc2a7b61868c16b1d9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725651122076797324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c2f51ba024b29a4ac10030f98d954eca576d5bd675ee511a3233fc18006359,PodSandboxId:7979ed526635e3afb43b4cb6cc1c63bf3ac1c5b87214f3dde33816e5c92b31ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725651122080148992,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9658954e-bc36-43ad-a132-d7245b09b496 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a724a02d1d40e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   cd7c8d5d20cb9       busybox-7dff88458-lmdp2
	fa57d09d65ee9       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   03b3e59c782e6       kindnet-6jxr2
	9ff3fc7226613       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   b3e165b4e5466       coredns-6f6b679f8f-r9zn7
	60903012261f8       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   f010aa4d0b2c7       kube-proxy-k2p8s
	01df9f85f91f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   5a8b4eb909666       storage-provisioner
	5b26eb0c19801       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   47a10633a7f27       kube-controller-manager-multinode-002640
	b2734c24e5585       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   5900e25fc73ed       kube-scheduler-multinode-002640
	e7c2f51ba024b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   7979ed526635e       etcd-multinode-002640
	f250ca60b2d27       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   e0782b60f43d8       kube-apiserver-multinode-002640
	599f6b25cf823       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   3a30bbfc3e361       busybox-7dff88458-lmdp2
	fefbece35c814       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   b16fb217c884c       coredns-6f6b679f8f-r9zn7
	e1baf3591f865       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   32dbc56fdd7f9       storage-provisioner
	9f4b7c0789cdb       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   a8c7e71e0bc17       kindnet-6jxr2
	7a97ccf9e25bd       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   fa6584d7fed34       kube-proxy-k2p8s
	826cb5eabec2d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   6b861cae65362       etcd-multinode-002640
	9457839bc33e5       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   b923cc24dbfcf       kube-controller-manager-multinode-002640
	3a7bc4e5358db       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   f2615377e2f23       kube-scheduler-multinode-002640
	bc1c460c83658       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   8d2a2c6dc681a       kube-apiserver-multinode-002640
	
	
	==> coredns [9ff3fc7226613fe452a21e19a6048bd6e7dfbc83e99311374336ad933046d709] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40355 - 44850 "HINFO IN 4530067088066444664.7866583828330175151. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016266096s
	
	
	==> coredns [fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306] <==
	[INFO] 10.244.0.3:37220 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001842971s
	[INFO] 10.244.0.3:54089 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000052774s
	[INFO] 10.244.0.3:54430 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175648s
	[INFO] 10.244.0.3:41666 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001312607s
	[INFO] 10.244.0.3:52046 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000037123s
	[INFO] 10.244.0.3:53322 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000212003s
	[INFO] 10.244.0.3:40781 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034851s
	[INFO] 10.244.1.2:44728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150045s
	[INFO] 10.244.1.2:36902 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197259s
	[INFO] 10.244.1.2:37177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082454s
	[INFO] 10.244.1.2:43564 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086899s
	[INFO] 10.244.0.3:37835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141518s
	[INFO] 10.244.0.3:55848 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012575s
	[INFO] 10.244.0.3:57984 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075199s
	[INFO] 10.244.0.3:52551 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123644s
	[INFO] 10.244.1.2:59686 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117513s
	[INFO] 10.244.1.2:40137 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000291439s
	[INFO] 10.244.1.2:48575 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014294s
	[INFO] 10.244.1.2:46149 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156939s
	[INFO] 10.244.0.3:38524 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207642s
	[INFO] 10.244.0.3:36093 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104122s
	[INFO] 10.244.0.3:33620 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077408s
	[INFO] 10.244.0.3:41967 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148167s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-002640
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-002640
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=multinode-002640
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T19_25_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:25:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-002640
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:33:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:32:05 +0000   Fri, 06 Sep 2024 19:25:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:32:05 +0000   Fri, 06 Sep 2024 19:25:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:32:05 +0000   Fri, 06 Sep 2024 19:25:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:32:05 +0000   Fri, 06 Sep 2024 19:25:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    multinode-002640
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3607ea6983d145bc88300af15ddf5220
	  System UUID:                3607ea69-83d1-45bc-8830-0af15ddf5220
	  Boot ID:                    3b3ddd88-c018-4c71-9ab0-6dfe28885d9c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lmdp2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 coredns-6f6b679f8f-r9zn7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m18s
	  kube-system                 etcd-multinode-002640                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m23s
	  kube-system                 kindnet-6jxr2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m18s
	  kube-system                 kube-apiserver-multinode-002640             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-controller-manager-multinode-002640    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-proxy-k2p8s                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-scheduler-multinode-002640             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m16s                kube-proxy       
	  Normal  Starting                 93s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m23s                kubelet          Node multinode-002640 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m23s                kubelet          Node multinode-002640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s                kubelet          Node multinode-002640 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m23s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m19s                node-controller  Node multinode-002640 event: Registered Node multinode-002640 in Controller
	  Normal  NodeReady                8m4s                 kubelet          Node multinode-002640 status is now: NodeReady
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s (x8 over 100s)  kubelet          Node multinode-002640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x8 over 100s)  kubelet          Node multinode-002640 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x7 over 100s)  kubelet          Node multinode-002640 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           93s                  node-controller  Node multinode-002640 event: Registered Node multinode-002640 in Controller
	
	
	Name:               multinode-002640-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-002640-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=multinode-002640
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T19_32_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:32:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-002640-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:33:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:33:14 +0000   Fri, 06 Sep 2024 19:32:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:33:14 +0000   Fri, 06 Sep 2024 19:32:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:33:14 +0000   Fri, 06 Sep 2024 19:32:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:33:14 +0000   Fri, 06 Sep 2024 19:33:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    multinode-002640-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f648484cde984cd4ab7f7b70a35d7214
	  System UUID:                f648484c-de98-4cd4-ab7f-7b70a35d7214
	  Boot ID:                    63abbd1b-aec2-430b-8dad-8475ed083e6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7qc4m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kindnet-7lg7n              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m30s
	  kube-system                 kube-proxy-8dfs6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 53s                    kube-proxy       
	  Normal  Starting                 7m24s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m30s (x2 over 7m30s)  kubelet          Node multinode-002640-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m30s (x2 over 7m30s)  kubelet          Node multinode-002640-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m30s (x2 over 7m30s)  kubelet          Node multinode-002640-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m11s                  kubelet          Node multinode-002640-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  58s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  57s (x2 over 58s)      kubelet          Node multinode-002640-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x2 over 58s)      kubelet          Node multinode-002640-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x2 over 58s)      kubelet          Node multinode-002640-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                    node-controller  Node multinode-002640-m02 event: Registered Node multinode-002640-m02 in Controller
	  Normal  NodeReady                40s                    kubelet          Node multinode-002640-m02 status is now: NodeReady
	
	
	Name:               multinode-002640-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-002640-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=multinode-002640
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T19_33_21_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:33:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-002640-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:33:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:33:38 +0000   Fri, 06 Sep 2024 19:33:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:33:38 +0000   Fri, 06 Sep 2024 19:33:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:33:38 +0000   Fri, 06 Sep 2024 19:33:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:33:38 +0000   Fri, 06 Sep 2024 19:33:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    multinode-002640-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e8f928131de4ec2b062d17f2db1146c
	  System UUID:                8e8f9281-31de-4ec2-b062-d17f2db1146c
	  Boot ID:                    08a8f6ca-e36c-452e-a32a-4775a3a49d72
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2hxnj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-proxy-67k7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m27s                  kube-proxy  
	  Normal  Starting                 16s                    kube-proxy  
	  Normal  Starting                 5m41s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m32s (x2 over 6m32s)  kubelet     Node multinode-002640-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s (x2 over 6m32s)  kubelet     Node multinode-002640-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m32s (x2 over 6m32s)  kubelet     Node multinode-002640-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s                  kubelet     Node multinode-002640-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m44s (x2 over 5m45s)  kubelet     Node multinode-002640-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m44s (x2 over 5m45s)  kubelet     Node multinode-002640-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m44s (x2 over 5m45s)  kubelet     Node multinode-002640-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m28s                  kubelet     Node multinode-002640-m03 status is now: NodeReady
	  Normal  Starting                 21s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet     Node multinode-002640-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet     Node multinode-002640-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet     Node multinode-002640-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-002640-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.062467] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.178574] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.148092] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.265951] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.928107] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +4.673964] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.061511] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.985333] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.089577] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.678389] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.101675] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.029032] kauditd_printk_skb: 60 callbacks suppressed
	[Sep 6 19:26] kauditd_printk_skb: 14 callbacks suppressed
	[Sep 6 19:31] systemd-fstab-generator[2662]: Ignoring "noauto" option for root device
	[  +0.160453] systemd-fstab-generator[2674]: Ignoring "noauto" option for root device
	[  +0.169870] systemd-fstab-generator[2689]: Ignoring "noauto" option for root device
	[  +0.132379] systemd-fstab-generator[2701]: Ignoring "noauto" option for root device
	[  +0.268073] systemd-fstab-generator[2729]: Ignoring "noauto" option for root device
	[  +7.861502] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.085312] kauditd_printk_skb: 100 callbacks suppressed
	[Sep 6 19:32] systemd-fstab-generator[2943]: Ignoring "noauto" option for root device
	[  +5.643569] kauditd_printk_skb: 74 callbacks suppressed
	[  +9.677317] kauditd_printk_skb: 34 callbacks suppressed
	[  +3.218143] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[ +20.869624] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34] <==
	{"level":"info","ts":"2024-09-06T19:25:14.530846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became leader at term 2"}
	{"level":"info","ts":"2024-09-06T19:25:14.530853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b546310005a4f8aa elected leader b546310005a4f8aa at term 2"}
	{"level":"info","ts":"2024-09-06T19:25:14.540791Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:25:14.542857Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b546310005a4f8aa","local-member-attributes":"{Name:multinode-002640 ClientURLs:[https://192.168.39.11:2379]}","request-path":"/0/members/b546310005a4f8aa/attributes","cluster-id":"7cea85d65aab3581","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:25:14.543074Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:25:14.543284Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cea85d65aab3581","local-member-id":"b546310005a4f8aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:25:14.546749Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:25:14.546821Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:25:14.547475Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:25:14.543399Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:25:14.545688Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:25:14.552464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.11:2379"}
	{"level":"info","ts":"2024-09-06T19:25:14.555328Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:25:14.559000Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:25:14.560117Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:30:19.249150Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-06T19:30:19.249289Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-002640","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.11:2380"],"advertise-client-urls":["https://192.168.39.11:2379"]}
	{"level":"warn","ts":"2024-09-06T19:30:19.249417Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:30:19.249512Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:30:19.331394Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.11:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:30:19.331445Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.11:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:30:19.332928Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b546310005a4f8aa","current-leader-member-id":"b546310005a4f8aa"}
	{"level":"info","ts":"2024-09-06T19:30:19.335232Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.11:2380"}
	{"level":"info","ts":"2024-09-06T19:30:19.335346Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.11:2380"}
	{"level":"info","ts":"2024-09-06T19:30:19.335355Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-002640","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.11:2380"],"advertise-client-urls":["https://192.168.39.11:2379"]}
	
	
	==> etcd [e7c2f51ba024b29a4ac10030f98d954eca576d5bd675ee511a3233fc18006359] <==
	{"level":"info","ts":"2024-09-06T19:32:02.452853Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:32:02.436896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa switched to configuration voters=(13062181645399161002)"}
	{"level":"info","ts":"2024-09-06T19:32:02.453282Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7cea85d65aab3581","local-member-id":"b546310005a4f8aa","added-peer-id":"b546310005a4f8aa","added-peer-peer-urls":["https://192.168.39.11:2380"]}
	{"level":"info","ts":"2024-09-06T19:32:02.453451Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b546310005a4f8aa","initial-advertise-peer-urls":["https://192.168.39.11:2380"],"listen-peer-urls":["https://192.168.39.11:2380"],"advertise-client-urls":["https://192.168.39.11:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.11:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T19:32:02.453716Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cea85d65aab3581","local-member-id":"b546310005a4f8aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:32:02.436340Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-06T19:32:02.455733Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T19:32:02.467386Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:32:04.191627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:32:04.191727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:32:04.191766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa received MsgPreVoteResp from b546310005a4f8aa at term 2"}
	{"level":"info","ts":"2024-09-06T19:32:04.191786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:32:04.191792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa received MsgVoteResp from b546310005a4f8aa at term 3"}
	{"level":"info","ts":"2024-09-06T19:32:04.191800Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became leader at term 3"}
	{"level":"info","ts":"2024-09-06T19:32:04.191808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b546310005a4f8aa elected leader b546310005a4f8aa at term 3"}
	{"level":"info","ts":"2024-09-06T19:32:04.197436Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b546310005a4f8aa","local-member-attributes":"{Name:multinode-002640 ClientURLs:[https://192.168.39.11:2379]}","request-path":"/0/members/b546310005a4f8aa/attributes","cluster-id":"7cea85d65aab3581","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:32:04.197451Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:32:04.197684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:32:04.198088Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:32:04.198118Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:32:04.198953Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:32:04.200490Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.11:2379"}
	{"level":"info","ts":"2024-09-06T19:32:04.199254Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:32:04.201871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:33:27.159319Z","caller":"traceutil/trace.go:171","msg":"trace[788905189] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"148.979422ms","start":"2024-09-06T19:33:27.010303Z","end":"2024-09-06T19:33:27.159283Z","steps":["trace[788905189] 'process raft request'  (duration: 104.564745ms)","trace[788905189] 'compare'  (duration: 43.963352ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:33:41 up 9 min,  0 users,  load average: 0.18, 0.22, 0.12
	Linux multinode-002640 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb] <==
	I0906 19:29:37.200492       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:29:47.193593       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:29:47.193774       1 main.go:299] handling current node
	I0906 19:29:47.193811       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:29:47.193831       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:29:47.193964       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:29:47.193987       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:29:57.194072       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:29:57.194108       1 main.go:299] handling current node
	I0906 19:29:57.194129       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:29:57.194134       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:29:57.194275       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:29:57.194302       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:30:07.201163       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:30:07.201283       1 main.go:299] handling current node
	I0906 19:30:07.201314       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:30:07.201333       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:30:07.201476       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:30:07.201507       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:30:17.193348       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:30:17.193425       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:30:17.193582       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:30:17.193614       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:30:17.193752       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:30:17.193785       1 main.go:299] handling current node
	
	
	==> kindnet [fa57d09d65ee9af85705b64493b569089b2f9110c13fa9e3d7fb316014a5b683] <==
	I0906 19:32:58.390550       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:33:08.390385       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:33:08.390526       1 main.go:299] handling current node
	I0906 19:33:08.390553       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:33:08.390561       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:33:08.390790       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:33:08.390825       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:33:18.391735       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:33:18.391812       1 main.go:299] handling current node
	I0906 19:33:18.391841       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:33:18.391852       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:33:18.392104       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:33:18.392139       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:33:28.392127       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:33:28.392286       1 main.go:299] handling current node
	I0906 19:33:28.392342       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:33:28.392430       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:33:28.392740       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:33:28.392804       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.2.0/24] 
	I0906 19:33:38.391378       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:33:38.391474       1 main.go:299] handling current node
	I0906 19:33:38.391501       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:33:38.391519       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:33:38.391816       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:33:38.391862       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7] <==
	I0906 19:25:23.479935       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0906 19:25:23.580162       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0906 19:26:38.079366       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:43940: use of closed network connection
	E0906 19:26:38.292861       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:43954: use of closed network connection
	E0906 19:26:38.482862       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:43962: use of closed network connection
	E0906 19:26:38.648600       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:43988: use of closed network connection
	E0906 19:26:38.969711       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:44040: use of closed network connection
	E0906 19:26:39.258048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:44064: use of closed network connection
	E0906 19:26:39.432522       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:44090: use of closed network connection
	E0906 19:26:39.601921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:44106: use of closed network connection
	E0906 19:26:39.773220       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:44112: use of closed network connection
	I0906 19:30:19.247576       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0906 19:30:19.262288       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.263950       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0906 19:30:19.266458       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0906 19:30:19.268732       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.269022       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.269738       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.270357       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.275548       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.276136       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.276202       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.276259       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.276294       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.284942       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f250ca60b2d272b4ddb4d71770f5ee8e02754d75184750851522f584f6371c1d] <==
	I0906 19:32:05.530037       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0906 19:32:05.530997       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:32:05.531268       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:32:05.531301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:32:05.531459       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:32:05.537069       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:32:05.541034       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:32:05.541071       1 policy_source.go:224] refreshing policies
	I0906 19:32:05.553147       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:32:05.554996       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:32:05.556125       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:32:05.556218       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:32:05.556234       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:32:05.556240       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:32:05.556245       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:32:05.557993       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E0906 19:32:05.576770       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0906 19:32:06.441716       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 19:32:07.984460       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0906 19:32:08.122877       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0906 19:32:08.133468       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0906 19:32:08.208286       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:32:08.214182       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 19:32:08.965558       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:32:09.143843       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5b26eb0c19801bdac34a1293c1d478898e046a00ddd5d65dd21c502fd6a95206] <==
	I0906 19:33:01.753008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:33:01.759266       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="79.964µs"
	I0906 19:33:01.775090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.72µs"
	I0906 19:33:03.999758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:33:04.626333       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.392719ms"
	I0906 19:33:04.628826       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="2.267139ms"
	I0906 19:33:14.401124       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:33:19.447414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:19.468619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:19.702796       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:33:19.703106       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:20.797360       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-002640-m03\" does not exist"
	I0906 19:33:20.798695       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:33:20.812030       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-002640-m03" podCIDRs=["10.244.2.0/24"]
	I0906 19:33:20.812118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:20.812182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:20.816416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:21.210307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:21.552152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:24.042783       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:30.828477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:38.466439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:38.466755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:33:38.479236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:39.017738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	
	
	==> kube-controller-manager [9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef] <==
	I0906 19:27:55.740526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:55.975411       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:55.975526       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:27:57.133170       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:27:57.134534       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-002640-m03\" does not exist"
	I0906 19:27:57.152172       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-002640-m03" podCIDRs=["10.244.3.0/24"]
	I0906 19:27:57.152277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:57.152467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:57.500834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:57.689831       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:57.857139       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:28:07.480698       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:28:13.838283       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:28:13.838454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:28:13.849307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:28:17.600057       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:28:57.616554       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m03"
	I0906 19:28:57.617201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:28:57.633167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:28:57.673162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.94992ms"
	I0906 19:28:57.674256       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.173µs"
	I0906 19:29:02.674959       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:29:02.694620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:29:02.720893       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:29:12.800012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	
	
	==> kube-proxy [60903012261f8b49797fd5005d85b8b6897f9bcc1e07852670509faa428de3d8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:32:07.497741       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:32:07.579205       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.11"]
	E0906 19:32:07.579277       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:32:07.716683       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:32:07.716722       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:32:07.716755       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:32:07.725815       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:32:07.726270       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:32:07.726362       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:32:07.730459       1 config.go:197] "Starting service config controller"
	I0906 19:32:07.730601       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:32:07.730712       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:32:07.730758       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:32:07.731312       1 config.go:326] "Starting node config controller"
	I0906 19:32:07.731403       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:32:07.832827       1 shared_informer.go:320] Caches are synced for node config
	I0906 19:32:07.832871       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:32:07.832898       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:25:24.674371       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:25:24.685511       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.11"]
	E0906 19:25:24.685757       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:25:24.756918       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:25:24.756977       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:25:24.757017       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:25:24.763036       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:25:24.763358       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:25:24.763389       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:25:24.765141       1 config.go:197] "Starting service config controller"
	I0906 19:25:24.765181       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:25:24.765199       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:25:24.765204       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:25:24.766117       1 config.go:326] "Starting node config controller"
	I0906 19:25:24.766128       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:25:24.866074       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:25:24.866089       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:25:24.866206       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a] <==
	W0906 19:25:15.844826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 19:25:15.844908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:15.844927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:25:15.847413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0906 19:25:15.847166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.739464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 19:25:16.739493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.761143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:25:16.761266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.839961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 19:25:16.840061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.859516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 19:25:16.859758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.905833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 19:25:16.906202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.998843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 19:25:16.998986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:17.028199       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 19:25:17.028329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:17.067222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 19:25:17.067359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:17.101839       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 19:25:17.101960       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0906 19:25:20.322444       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 19:30:19.248571       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b2734c24e5585361c206722b532e9e32d1f8be04b43de76b540f6613e035b51b] <==
	I0906 19:32:03.182191       1 serving.go:386] Generated self-signed cert in-memory
	W0906 19:32:05.480211       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:32:05.480237       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:32:05.480247       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:32:05.480257       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:32:05.539251       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 19:32:05.541732       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:32:05.562858       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 19:32:05.563140       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:32:05.563231       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:32:05.563345       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:32:05.663939       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 19:32:11 multinode-002640 kubelet[2950]: E0906 19:32:11.429840    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651131428407073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:32:16 multinode-002640 kubelet[2950]: I0906 19:32:16.424899    2950 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 06 19:32:21 multinode-002640 kubelet[2950]: E0906 19:32:21.431780    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651141431461686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:32:21 multinode-002640 kubelet[2950]: E0906 19:32:21.432143    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651141431461686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:32:31 multinode-002640 kubelet[2950]: E0906 19:32:31.435505    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651151433622299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:32:31 multinode-002640 kubelet[2950]: E0906 19:32:31.435527    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651151433622299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:32:41 multinode-002640 kubelet[2950]: E0906 19:32:41.437110    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651161436403837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:32:41 multinode-002640 kubelet[2950]: E0906 19:32:41.437189    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651161436403837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:32:51 multinode-002640 kubelet[2950]: E0906 19:32:51.439916    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651171439190782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:32:51 multinode-002640 kubelet[2950]: E0906 19:32:51.439950    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651171439190782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:33:01 multinode-002640 kubelet[2950]: E0906 19:33:01.441863    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651181441355420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:33:01 multinode-002640 kubelet[2950]: E0906 19:33:01.441887    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651181441355420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:33:01 multinode-002640 kubelet[2950]: E0906 19:33:01.455122    2950 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:33:01 multinode-002640 kubelet[2950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:33:01 multinode-002640 kubelet[2950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:33:01 multinode-002640 kubelet[2950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:33:01 multinode-002640 kubelet[2950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:33:11 multinode-002640 kubelet[2950]: E0906 19:33:11.444219    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651191443870285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:33:11 multinode-002640 kubelet[2950]: E0906 19:33:11.444288    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651191443870285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:33:21 multinode-002640 kubelet[2950]: E0906 19:33:21.453895    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651201452513731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:33:21 multinode-002640 kubelet[2950]: E0906 19:33:21.453941    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651201452513731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:33:31 multinode-002640 kubelet[2950]: E0906 19:33:31.456821    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651211456263477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:33:31 multinode-002640 kubelet[2950]: E0906 19:33:31.456870    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651211456263477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:33:41 multinode-002640 kubelet[2950]: E0906 19:33:41.459954    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651221458358431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:33:41 multinode-002640 kubelet[2950]: E0906 19:33:41.459985    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651221458358431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:33:40.985389   45285 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19576-6021/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-002640 -n multinode-002640
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-002640 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (326.34s)
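
	Editor's aside (not part of the test output): the "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.Scanner hitting its default per-line limit of bufio.MaxScanTokenSize (64 KiB) while reading lastStart.txt. A minimal, illustrative Go sketch of reading such a file with a larger limit follows; it is not minikube's actual code, and the file path and buffer sizes are assumptions taken from the error message.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path copied from the error above; purely illustrative.
		f, err := os.Open("/home/jenkins/minikube-integration/19576-6021/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line cap to 10 MiB; the default (bufio.MaxScanTokenSize, 64 KiB)
		// is what triggers "token too long" on very long log lines.
		sc.Buffer(make([]byte, 1024*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}
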

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 stop
E0906 19:34:49.187920   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-002640 stop: exit status 82 (2m0.448537544s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-002640-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-002640 stop": exit status 82
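For context on the GUEST_STOP_TIMEOUT above: "minikube stop" gave up after roughly two minutes while the m02 VM was still reported "Running". A minimal Go sketch (not minikube's stop path) that polls the libvirt domain state over the same window via "virsh domstate" looks like this; the domain name "multinode-002640-m02" is an assumption based on the profile and node names in this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const domain = "multinode-002640-m02" // assumed libvirt domain name
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// "virsh domstate" prints e.g. "running" or "shut off".
		out, err := exec.Command("virsh", "domstate", domain).CombinedOutput()
		state := strings.TrimSpace(string(out))
		fmt.Printf("state=%q err=%v\n", state, err)
		if state == "shut off" {
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("domain still running after 2m, consistent with exit status 82 above")
}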
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-002640 status: exit status 3 (18.826437575s)

                                                
                                                
-- stdout --
	multinode-002640
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-002640-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:36:04.233156   45947 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.12:22: connect: no route to host
	E0906 19:36:04.233192   45947 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.12:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-002640 status" : exit status 3
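The status failure above reduces to the SSH port on 192.168.39.12 being unreachable ("no route to host"). A minimal sketch of that reachability probe, using only the address reported in the stderr output:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the status stderr above.
	conn, err := net.DialTimeout("tcp", "192.168.39.12:22", 5*time.Second)
	if err != nil {
		fmt.Println("ssh port unreachable:", err) // e.g. connect: no route to host
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}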
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-002640 -n multinode-002640
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-002640 logs -n 25: (1.390873732s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m02:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640:/home/docker/cp-test_multinode-002640-m02_multinode-002640.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n multinode-002640 sudo cat                                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-002640-m02_multinode-002640.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m02:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03:/home/docker/cp-test_multinode-002640-m02_multinode-002640-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n multinode-002640-m03 sudo cat                                   | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-002640-m02_multinode-002640-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp testdata/cp-test.txt                                                | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m03:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3017084892/001/cp-test_multinode-002640-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m03:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640:/home/docker/cp-test_multinode-002640-m03_multinode-002640.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n multinode-002640 sudo cat                                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-002640-m03_multinode-002640.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m03:/home/docker/cp-test.txt                       | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m02:/home/docker/cp-test_multinode-002640-m03_multinode-002640-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n multinode-002640-m02 sudo cat                                   | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-002640-m03_multinode-002640-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-002640 node stop m03                                                          | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	| node    | multinode-002640 node start                                                             | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:28 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-002640                                                                | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:28 UTC |                     |
	| stop    | -p multinode-002640                                                                     | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:28 UTC |                     |
	| start   | -p multinode-002640                                                                     | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:30 UTC | 06 Sep 24 19:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-002640                                                                | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:33 UTC |                     |
	| node    | multinode-002640 node delete                                                            | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:33 UTC | 06 Sep 24 19:33 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-002640 stop                                                                   | multinode-002640 | jenkins | v1.34.0 | 06 Sep 24 19:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 19:30:18
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:30:18.359400   44141 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:30:18.359637   44141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:30:18.359645   44141 out.go:358] Setting ErrFile to fd 2...
	I0906 19:30:18.359649   44141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:30:18.359820   44141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:30:18.360332   44141 out.go:352] Setting JSON to false
	I0906 19:30:18.361217   44141 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4367,"bootTime":1725646651,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:30:18.361275   44141 start.go:139] virtualization: kvm guest
	I0906 19:30:18.363247   44141 out.go:177] * [multinode-002640] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:30:18.364505   44141 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:30:18.364509   44141 notify.go:220] Checking for updates...
	I0906 19:30:18.366816   44141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:30:18.367983   44141 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:30:18.369023   44141 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:30:18.370154   44141 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:30:18.371280   44141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:30:18.372843   44141 config.go:182] Loaded profile config "multinode-002640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:30:18.372952   44141 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:30:18.373382   44141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:30:18.373458   44141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:30:18.388035   44141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I0906 19:30:18.388451   44141 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:30:18.389002   44141 main.go:141] libmachine: Using API Version  1
	I0906 19:30:18.389022   44141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:30:18.389364   44141 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:30:18.389581   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:30:18.424352   44141 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 19:30:18.425395   44141 start.go:297] selected driver: kvm2
	I0906 19:30:18.425410   44141 start.go:901] validating driver "kvm2" against &{Name:multinode-002640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-002640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:30:18.425603   44141 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:30:18.425962   44141 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:30:18.426034   44141 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:30:18.440182   44141 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:30:18.441134   44141 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:30:18.441175   44141 cni.go:84] Creating CNI manager for ""
	I0906 19:30:18.441183   44141 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 19:30:18.441255   44141 start.go:340] cluster config:
	{Name:multinode-002640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-002640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:30:18.441412   44141 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:30:18.443422   44141 out.go:177] * Starting "multinode-002640" primary control-plane node in "multinode-002640" cluster
	I0906 19:30:18.444566   44141 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:30:18.444601   44141 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:30:18.444611   44141 cache.go:56] Caching tarball of preloaded images
	I0906 19:30:18.444686   44141 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:30:18.444699   44141 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 19:30:18.444816   44141 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/config.json ...
	I0906 19:30:18.445030   44141 start.go:360] acquireMachinesLock for multinode-002640: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:30:18.445072   44141 start.go:364] duration metric: took 24.266µs to acquireMachinesLock for "multinode-002640"
	I0906 19:30:18.445085   44141 start.go:96] Skipping create...Using existing machine configuration
	I0906 19:30:18.445090   44141 fix.go:54] fixHost starting: 
	I0906 19:30:18.445356   44141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:30:18.445391   44141 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:30:18.460181   44141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46333
	I0906 19:30:18.460569   44141 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:30:18.461023   44141 main.go:141] libmachine: Using API Version  1
	I0906 19:30:18.461049   44141 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:30:18.461394   44141 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:30:18.461583   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:30:18.461723   44141 main.go:141] libmachine: (multinode-002640) Calling .GetState
	I0906 19:30:18.463405   44141 fix.go:112] recreateIfNeeded on multinode-002640: state=Running err=<nil>
	W0906 19:30:18.463432   44141 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 19:30:18.465309   44141 out.go:177] * Updating the running kvm2 "multinode-002640" VM ...
	I0906 19:30:18.466360   44141 machine.go:93] provisionDockerMachine start ...
	I0906 19:30:18.466381   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:30:18.466601   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:18.469095   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.469520   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.469555   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.469730   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:30:18.469886   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.470027   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.470193   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:30:18.470403   44141 main.go:141] libmachine: Using SSH client type: native
	I0906 19:30:18.470654   44141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0906 19:30:18.470673   44141 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 19:30:18.582051   44141 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-002640
	
	I0906 19:30:18.582080   44141 main.go:141] libmachine: (multinode-002640) Calling .GetMachineName
	I0906 19:30:18.582357   44141 buildroot.go:166] provisioning hostname "multinode-002640"
	I0906 19:30:18.582381   44141 main.go:141] libmachine: (multinode-002640) Calling .GetMachineName
	I0906 19:30:18.582571   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:18.585086   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.585436   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.585458   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.585569   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:30:18.585716   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.585869   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.585986   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:30:18.586119   44141 main.go:141] libmachine: Using SSH client type: native
	I0906 19:30:18.586311   44141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0906 19:30:18.586333   44141 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-002640 && echo "multinode-002640" | sudo tee /etc/hostname
	I0906 19:30:18.708399   44141 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-002640
	
	I0906 19:30:18.708425   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:18.711246   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.711583   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.711634   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.711911   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:30:18.712093   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.712283   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.712481   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:30:18.712646   44141 main.go:141] libmachine: Using SSH client type: native
	I0906 19:30:18.712886   44141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0906 19:30:18.712912   44141 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-002640' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-002640/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-002640' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:30:18.822640   44141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:30:18.822669   44141 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:30:18.822685   44141 buildroot.go:174] setting up certificates
	I0906 19:30:18.822693   44141 provision.go:84] configureAuth start
	I0906 19:30:18.822700   44141 main.go:141] libmachine: (multinode-002640) Calling .GetMachineName
	I0906 19:30:18.822970   44141 main.go:141] libmachine: (multinode-002640) Calling .GetIP
	I0906 19:30:18.825909   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.826423   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.826443   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.826650   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:18.829103   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.829463   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.829498   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.829665   44141 provision.go:143] copyHostCerts
	I0906 19:30:18.829697   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:30:18.829737   44141 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:30:18.829757   44141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:30:18.829837   44141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:30:18.829949   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:30:18.829975   44141 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:30:18.829982   44141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:30:18.830026   44141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:30:18.830105   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:30:18.830137   44141 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:30:18.830146   44141 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:30:18.830186   44141 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:30:18.830268   44141 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.multinode-002640 san=[127.0.0.1 192.168.39.11 localhost minikube multinode-002640]
	I0906 19:30:18.958949   44141 provision.go:177] copyRemoteCerts
	I0906 19:30:18.959011   44141 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:30:18.959050   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:18.961879   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.962204   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:18.962229   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:18.962450   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:30:18.962633   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:18.962823   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:30:18.962934   44141 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/multinode-002640/id_rsa Username:docker}
	I0906 19:30:19.049693   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0906 19:30:19.049772   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:30:19.078166   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0906 19:30:19.078232   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0906 19:30:19.106308   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0906 19:30:19.106382   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 19:30:19.132542   44141 provision.go:87] duration metric: took 309.840007ms to configureAuth
	I0906 19:30:19.132572   44141 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:30:19.132780   44141 config.go:182] Loaded profile config "multinode-002640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:30:19.132842   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:30:19.135345   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:19.135706   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:30:19.135748   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:30:19.135889   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:30:19.136060   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:19.136241   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:30:19.136382   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:30:19.136538   44141 main.go:141] libmachine: Using SSH client type: native
	I0906 19:30:19.136707   44141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0906 19:30:19.136721   44141 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:31:49.794349   44141 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:31:49.794375   44141 machine.go:96] duration metric: took 1m31.328001388s to provisionDockerMachine
	I0906 19:31:49.794388   44141 start.go:293] postStartSetup for "multinode-002640" (driver="kvm2")
	I0906 19:31:49.794399   44141 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:31:49.794416   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:31:49.794763   44141 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:31:49.794798   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:31:49.798045   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:49.798523   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:49.798546   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:49.798760   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:31:49.798953   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:31:49.799104   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:31:49.799242   44141 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/multinode-002640/id_rsa Username:docker}
	I0906 19:31:49.884200   44141 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:31:49.888428   44141 command_runner.go:130] > NAME=Buildroot
	I0906 19:31:49.888451   44141 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0906 19:31:49.888458   44141 command_runner.go:130] > ID=buildroot
	I0906 19:31:49.888465   44141 command_runner.go:130] > VERSION_ID=2023.02.9
	I0906 19:31:49.888477   44141 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0906 19:31:49.888512   44141 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:31:49.888531   44141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:31:49.888584   44141 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:31:49.888661   44141 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:31:49.888669   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /etc/ssl/certs/131782.pem
	I0906 19:31:49.888745   44141 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:31:49.899140   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:31:49.923154   44141 start.go:296] duration metric: took 128.75305ms for postStartSetup
	I0906 19:31:49.923203   44141 fix.go:56] duration metric: took 1m31.478112603s for fixHost
	I0906 19:31:49.923226   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:31:49.925945   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:49.926297   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:49.926321   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:49.926472   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:31:49.926683   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:31:49.926873   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:31:49.927016   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:31:49.927159   44141 main.go:141] libmachine: Using SSH client type: native
	I0906 19:31:49.927372   44141 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I0906 19:31:49.927384   44141 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:31:50.037836   44141 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725651110.007504213
	
	I0906 19:31:50.037858   44141 fix.go:216] guest clock: 1725651110.007504213
	I0906 19:31:50.037865   44141 fix.go:229] Guest: 2024-09-06 19:31:50.007504213 +0000 UTC Remote: 2024-09-06 19:31:49.923208502 +0000 UTC m=+91.597491316 (delta=84.295711ms)
	I0906 19:31:50.037883   44141 fix.go:200] guest clock delta is within tolerance: 84.295711ms
	I0906 19:31:50.037887   44141 start.go:83] releasing machines lock for "multinode-002640", held for 1m31.592808597s
	I0906 19:31:50.037904   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:31:50.038179   44141 main.go:141] libmachine: (multinode-002640) Calling .GetIP
	I0906 19:31:50.041081   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.041525   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:50.041554   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.041660   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:31:50.042202   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:31:50.042382   44141 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:31:50.042488   44141 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:31:50.042526   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:31:50.042634   44141 ssh_runner.go:195] Run: cat /version.json
	I0906 19:31:50.042661   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:31:50.045139   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.045481   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.045521   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:50.045541   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.045693   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:31:50.045839   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:31:50.045999   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:50.046023   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:31:50.046025   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:50.046159   44141 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/multinode-002640/id_rsa Username:docker}
	I0906 19:31:50.046173   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:31:50.046325   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:31:50.046478   44141 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:31:50.046650   44141 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/multinode-002640/id_rsa Username:docker}
	I0906 19:31:50.152801   44141 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0906 19:31:50.152889   44141 command_runner.go:130] > {"iso_version": "v1.34.0", "kicbase_version": "v0.0.44-1724862063-19530", "minikube_version": "v1.34.0", "commit": "613a681f9f90c87e637792fcb55bc4d32fe5c29c"}
	I0906 19:31:50.153018   44141 ssh_runner.go:195] Run: systemctl --version
	I0906 19:31:50.159012   44141 command_runner.go:130] > systemd 252 (252)
	I0906 19:31:50.159053   44141 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0906 19:31:50.159124   44141 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:31:50.322904   44141 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 19:31:50.328799   44141 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0906 19:31:50.328845   44141 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:31:50.328916   44141 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:31:50.338105   44141 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 19:31:50.338130   44141 start.go:495] detecting cgroup driver to use...
	I0906 19:31:50.338180   44141 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:31:50.354405   44141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:31:50.368385   44141 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:31:50.368457   44141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:31:50.382453   44141 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:31:50.397064   44141 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:31:50.561682   44141 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:31:50.706749   44141 docker.go:233] disabling docker service ...
	I0906 19:31:50.706821   44141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:31:50.723368   44141 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:31:50.736800   44141 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:31:50.872096   44141 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:31:51.009108   44141 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:31:51.022542   44141 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:31:51.041233   44141 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0906 19:31:51.041267   44141 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 19:31:51.041306   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.051541   44141 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:31:51.051602   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.062095   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.071925   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.081827   44141 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:31:51.091955   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.101991   44141 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.113254   44141 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:31:51.123919   44141 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:31:51.133077   44141 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0906 19:31:51.133142   44141 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
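	Taken together, the sed edits above leave the following settings in /etc/crio/crio.conf.d/02-crio.conf (key names and values are read off the sed expressions in the log; they land in whatever sections the drop-in already defines, and the rest of the file is left untouched):

		pause_image = "registry.k8s.io/pause:3.10"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]

	The sysctl probe and the ip_forward write just above ensure bridged traffic hits iptables and IPv4 forwarding is on; the daemon-reload and crio restart that follow pick up the new configuration.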
	I0906 19:31:51.142166   44141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:31:51.281135   44141 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 19:31:58.670116   44141 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.388943235s)
	I0906 19:31:58.670154   44141 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:31:58.670207   44141 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:31:58.675579   44141 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0906 19:31:58.675600   44141 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0906 19:31:58.675607   44141 command_runner.go:130] > Device: 0,22	Inode: 1322        Links: 1
	I0906 19:31:58.675615   44141 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 19:31:58.675623   44141 command_runner.go:130] > Access: 2024-09-06 19:31:58.533357989 +0000
	I0906 19:31:58.675642   44141 command_runner.go:130] > Modify: 2024-09-06 19:31:58.533357989 +0000
	I0906 19:31:58.675650   44141 command_runner.go:130] > Change: 2024-09-06 19:31:58.533357989 +0000
	I0906 19:31:58.675659   44141 command_runner.go:130] >  Birth: -
	I0906 19:31:58.675735   44141 start.go:563] Will wait 60s for crictl version
	I0906 19:31:58.675780   44141 ssh_runner.go:195] Run: which crictl
	I0906 19:31:58.679396   44141 command_runner.go:130] > /usr/bin/crictl
	I0906 19:31:58.679530   44141 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:31:58.714616   44141 command_runner.go:130] > Version:  0.1.0
	I0906 19:31:58.714643   44141 command_runner.go:130] > RuntimeName:  cri-o
	I0906 19:31:58.714647   44141 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0906 19:31:58.714653   44141 command_runner.go:130] > RuntimeApiVersion:  v1
	I0906 19:31:58.714669   44141 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
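	Since crictl now reads its endpoint from /etc/crictl.yaml, the same probe can be repeated by hand on the node and should report the values shown above (cri-o 1.29.1, CRI API v1):

		sudo /usr/bin/crictl version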
	I0906 19:31:58.714742   44141 ssh_runner.go:195] Run: crio --version
	I0906 19:31:58.743616   44141 command_runner.go:130] > crio version 1.29.1
	I0906 19:31:58.743640   44141 command_runner.go:130] > Version:        1.29.1
	I0906 19:31:58.743648   44141 command_runner.go:130] > GitCommit:      unknown
	I0906 19:31:58.743653   44141 command_runner.go:130] > GitCommitDate:  unknown
	I0906 19:31:58.743658   44141 command_runner.go:130] > GitTreeState:   clean
	I0906 19:31:58.743666   44141 command_runner.go:130] > BuildDate:      2024-09-03T22:31:57Z
	I0906 19:31:58.743671   44141 command_runner.go:130] > GoVersion:      go1.21.6
	I0906 19:31:58.743677   44141 command_runner.go:130] > Compiler:       gc
	I0906 19:31:58.743684   44141 command_runner.go:130] > Platform:       linux/amd64
	I0906 19:31:58.743694   44141 command_runner.go:130] > Linkmode:       dynamic
	I0906 19:31:58.743701   44141 command_runner.go:130] > BuildTags:      
	I0906 19:31:58.743707   44141 command_runner.go:130] >   containers_image_ostree_stub
	I0906 19:31:58.743713   44141 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0906 19:31:58.743720   44141 command_runner.go:130] >   btrfs_noversion
	I0906 19:31:58.743731   44141 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0906 19:31:58.743739   44141 command_runner.go:130] >   libdm_no_deferred_remove
	I0906 19:31:58.743764   44141 command_runner.go:130] >   seccomp
	I0906 19:31:58.743774   44141 command_runner.go:130] > LDFlags:          unknown
	I0906 19:31:58.743780   44141 command_runner.go:130] > SeccompEnabled:   true
	I0906 19:31:58.743786   44141 command_runner.go:130] > AppArmorEnabled:  false
	I0906 19:31:58.743851   44141 ssh_runner.go:195] Run: crio --version
	I0906 19:31:58.770470   44141 command_runner.go:130] > crio version 1.29.1
	I0906 19:31:58.770493   44141 command_runner.go:130] > Version:        1.29.1
	I0906 19:31:58.770519   44141 command_runner.go:130] > GitCommit:      unknown
	I0906 19:31:58.770525   44141 command_runner.go:130] > GitCommitDate:  unknown
	I0906 19:31:58.770530   44141 command_runner.go:130] > GitTreeState:   clean
	I0906 19:31:58.770538   44141 command_runner.go:130] > BuildDate:      2024-09-03T22:31:57Z
	I0906 19:31:58.770544   44141 command_runner.go:130] > GoVersion:      go1.21.6
	I0906 19:31:58.770550   44141 command_runner.go:130] > Compiler:       gc
	I0906 19:31:58.770557   44141 command_runner.go:130] > Platform:       linux/amd64
	I0906 19:31:58.770564   44141 command_runner.go:130] > Linkmode:       dynamic
	I0906 19:31:58.770586   44141 command_runner.go:130] > BuildTags:      
	I0906 19:31:58.770596   44141 command_runner.go:130] >   containers_image_ostree_stub
	I0906 19:31:58.770603   44141 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0906 19:31:58.770610   44141 command_runner.go:130] >   btrfs_noversion
	I0906 19:31:58.770620   44141 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0906 19:31:58.770627   44141 command_runner.go:130] >   libdm_no_deferred_remove
	I0906 19:31:58.770634   44141 command_runner.go:130] >   seccomp
	I0906 19:31:58.770641   44141 command_runner.go:130] > LDFlags:          unknown
	I0906 19:31:58.770649   44141 command_runner.go:130] > SeccompEnabled:   true
	I0906 19:31:58.770658   44141 command_runner.go:130] > AppArmorEnabled:  false
	I0906 19:31:58.773448   44141 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 19:31:58.774493   44141 main.go:141] libmachine: (multinode-002640) Calling .GetIP
	I0906 19:31:58.777060   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:58.777350   44141 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:31:58.777375   44141 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:31:58.777577   44141 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 19:31:58.781689   44141 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0906 19:31:58.781889   44141 kubeadm.go:883] updating cluster {Name:multinode-002640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-002640 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:31:58.782026   44141 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:31:58.782064   44141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:31:58.828445   44141 command_runner.go:130] > {
	I0906 19:31:58.828465   44141 command_runner.go:130] >   "images": [
	I0906 19:31:58.828470   44141 command_runner.go:130] >     {
	I0906 19:31:58.828477   44141 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0906 19:31:58.828481   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828486   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0906 19:31:58.828490   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828494   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828510   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0906 19:31:58.828516   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0906 19:31:58.828520   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828524   44141 command_runner.go:130] >       "size": "87165492",
	I0906 19:31:58.828528   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.828532   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.828538   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828542   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828545   44141 command_runner.go:130] >     },
	I0906 19:31:58.828555   44141 command_runner.go:130] >     {
	I0906 19:31:58.828561   44141 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0906 19:31:58.828565   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828570   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0906 19:31:58.828574   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828578   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828585   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0906 19:31:58.828595   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0906 19:31:58.828599   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828603   44141 command_runner.go:130] >       "size": "87190579",
	I0906 19:31:58.828607   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.828616   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.828620   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828624   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828627   44141 command_runner.go:130] >     },
	I0906 19:31:58.828631   44141 command_runner.go:130] >     {
	I0906 19:31:58.828636   44141 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0906 19:31:58.828641   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828645   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0906 19:31:58.828649   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828653   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828663   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0906 19:31:58.828669   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0906 19:31:58.828675   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828680   44141 command_runner.go:130] >       "size": "1363676",
	I0906 19:31:58.828683   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.828688   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.828691   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828695   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828699   44141 command_runner.go:130] >     },
	I0906 19:31:58.828702   44141 command_runner.go:130] >     {
	I0906 19:31:58.828708   44141 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0906 19:31:58.828713   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828717   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0906 19:31:58.828721   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828727   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828738   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0906 19:31:58.828754   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0906 19:31:58.828761   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828765   44141 command_runner.go:130] >       "size": "31470524",
	I0906 19:31:58.828769   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.828773   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.828778   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828784   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828787   44141 command_runner.go:130] >     },
	I0906 19:31:58.828790   44141 command_runner.go:130] >     {
	I0906 19:31:58.828797   44141 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0906 19:31:58.828801   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828806   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0906 19:31:58.828812   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828816   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828825   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0906 19:31:58.828832   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0906 19:31:58.828838   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828842   44141 command_runner.go:130] >       "size": "61245718",
	I0906 19:31:58.828845   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.828850   44141 command_runner.go:130] >       "username": "nonroot",
	I0906 19:31:58.828865   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828870   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828875   44141 command_runner.go:130] >     },
	I0906 19:31:58.828883   44141 command_runner.go:130] >     {
	I0906 19:31:58.828890   44141 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0906 19:31:58.828899   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.828906   44141 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0906 19:31:58.828915   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828920   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.828929   44141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0906 19:31:58.828935   44141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0906 19:31:58.828941   44141 command_runner.go:130] >       ],
	I0906 19:31:58.828945   44141 command_runner.go:130] >       "size": "149009664",
	I0906 19:31:58.828957   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.828964   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.828972   44141 command_runner.go:130] >       },
	I0906 19:31:58.828979   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.828988   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.828995   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.828998   44141 command_runner.go:130] >     },
	I0906 19:31:58.829001   44141 command_runner.go:130] >     {
	I0906 19:31:58.829007   44141 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0906 19:31:58.829013   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.829018   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0906 19:31:58.829021   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829025   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.829032   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0906 19:31:58.829041   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0906 19:31:58.829046   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829050   44141 command_runner.go:130] >       "size": "95233506",
	I0906 19:31:58.829056   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.829059   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.829063   44141 command_runner.go:130] >       },
	I0906 19:31:58.829067   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.829073   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.829076   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.829079   44141 command_runner.go:130] >     },
	I0906 19:31:58.829083   44141 command_runner.go:130] >     {
	I0906 19:31:58.829091   44141 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0906 19:31:58.829094   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.829099   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0906 19:31:58.829105   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829108   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.829129   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0906 19:31:58.829142   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0906 19:31:58.829145   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829149   44141 command_runner.go:130] >       "size": "89437512",
	I0906 19:31:58.829152   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.829156   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.829159   44141 command_runner.go:130] >       },
	I0906 19:31:58.829163   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.829177   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.829183   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.829186   44141 command_runner.go:130] >     },
	I0906 19:31:58.829189   44141 command_runner.go:130] >     {
	I0906 19:31:58.829195   44141 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0906 19:31:58.829199   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.829203   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0906 19:31:58.829206   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829210   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.829217   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0906 19:31:58.829223   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0906 19:31:58.829226   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829230   44141 command_runner.go:130] >       "size": "92728217",
	I0906 19:31:58.829234   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.829238   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.829241   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.829245   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.829249   44141 command_runner.go:130] >     },
	I0906 19:31:58.829253   44141 command_runner.go:130] >     {
	I0906 19:31:58.829262   44141 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0906 19:31:58.829267   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.829274   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0906 19:31:58.829277   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829281   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.829288   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0906 19:31:58.829297   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0906 19:31:58.829301   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829305   44141 command_runner.go:130] >       "size": "68420936",
	I0906 19:31:58.829311   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.829315   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.829318   44141 command_runner.go:130] >       },
	I0906 19:31:58.829322   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.829326   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.829330   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.829333   44141 command_runner.go:130] >     },
	I0906 19:31:58.829336   44141 command_runner.go:130] >     {
	I0906 19:31:58.829346   44141 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0906 19:31:58.829352   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.829357   44141 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0906 19:31:58.829360   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829363   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.829373   44141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0906 19:31:58.829382   44141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0906 19:31:58.829385   44141 command_runner.go:130] >       ],
	I0906 19:31:58.829389   44141 command_runner.go:130] >       "size": "742080",
	I0906 19:31:58.829393   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.829397   44141 command_runner.go:130] >         "value": "65535"
	I0906 19:31:58.829403   44141 command_runner.go:130] >       },
	I0906 19:31:58.829407   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.829413   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.829417   44141 command_runner.go:130] >       "pinned": true
	I0906 19:31:58.829420   44141 command_runner.go:130] >     }
	I0906 19:31:58.829423   44141 command_runner.go:130] >   ]
	I0906 19:31:58.829426   44141 command_runner.go:130] > }
	I0906 19:31:58.830439   44141 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:31:58.830451   44141 crio.go:433] Images already preloaded, skipping extraction
	I0906 19:31:58.830490   44141 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:31:58.861960   44141 command_runner.go:130] > {
	I0906 19:31:58.861979   44141 command_runner.go:130] >   "images": [
	I0906 19:31:58.861983   44141 command_runner.go:130] >     {
	I0906 19:31:58.861991   44141 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0906 19:31:58.861995   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862001   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0906 19:31:58.862004   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862008   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862019   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0906 19:31:58.862026   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0906 19:31:58.862030   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862034   44141 command_runner.go:130] >       "size": "87165492",
	I0906 19:31:58.862038   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862042   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862046   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862051   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862054   44141 command_runner.go:130] >     },
	I0906 19:31:58.862057   44141 command_runner.go:130] >     {
	I0906 19:31:58.862063   44141 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0906 19:31:58.862067   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862072   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0906 19:31:58.862079   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862082   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862090   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0906 19:31:58.862097   44141 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0906 19:31:58.862106   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862111   44141 command_runner.go:130] >       "size": "87190579",
	I0906 19:31:58.862115   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862123   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862129   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862133   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862136   44141 command_runner.go:130] >     },
	I0906 19:31:58.862140   44141 command_runner.go:130] >     {
	I0906 19:31:58.862146   44141 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0906 19:31:58.862152   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862157   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0906 19:31:58.862161   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862164   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862171   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0906 19:31:58.862179   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0906 19:31:58.862182   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862187   44141 command_runner.go:130] >       "size": "1363676",
	I0906 19:31:58.862199   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862206   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862210   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862213   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862216   44141 command_runner.go:130] >     },
	I0906 19:31:58.862220   44141 command_runner.go:130] >     {
	I0906 19:31:58.862225   44141 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0906 19:31:58.862229   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862234   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0906 19:31:58.862240   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862244   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862252   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0906 19:31:58.862268   44141 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0906 19:31:58.862273   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862277   44141 command_runner.go:130] >       "size": "31470524",
	I0906 19:31:58.862281   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862284   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862288   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862292   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862299   44141 command_runner.go:130] >     },
	I0906 19:31:58.862303   44141 command_runner.go:130] >     {
	I0906 19:31:58.862309   44141 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0906 19:31:58.862315   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862320   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0906 19:31:58.862323   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862329   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862337   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0906 19:31:58.862352   44141 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0906 19:31:58.862357   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862361   44141 command_runner.go:130] >       "size": "61245718",
	I0906 19:31:58.862365   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862369   44141 command_runner.go:130] >       "username": "nonroot",
	I0906 19:31:58.862373   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862377   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862380   44141 command_runner.go:130] >     },
	I0906 19:31:58.862384   44141 command_runner.go:130] >     {
	I0906 19:31:58.862390   44141 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0906 19:31:58.862396   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862401   44141 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0906 19:31:58.862406   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862410   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862417   44141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0906 19:31:58.862426   44141 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0906 19:31:58.862429   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862434   44141 command_runner.go:130] >       "size": "149009664",
	I0906 19:31:58.862439   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.862443   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.862448   44141 command_runner.go:130] >       },
	I0906 19:31:58.862452   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862456   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862462   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862465   44141 command_runner.go:130] >     },
	I0906 19:31:58.862468   44141 command_runner.go:130] >     {
	I0906 19:31:58.862474   44141 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0906 19:31:58.862480   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862491   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0906 19:31:58.862497   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862508   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862517   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0906 19:31:58.862525   44141 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0906 19:31:58.862528   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862532   44141 command_runner.go:130] >       "size": "95233506",
	I0906 19:31:58.862535   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.862540   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.862549   44141 command_runner.go:130] >       },
	I0906 19:31:58.862553   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862556   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862560   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862563   44141 command_runner.go:130] >     },
	I0906 19:31:58.862567   44141 command_runner.go:130] >     {
	I0906 19:31:58.862572   44141 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0906 19:31:58.862578   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862583   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0906 19:31:58.862589   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862593   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862613   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0906 19:31:58.862623   44141 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0906 19:31:58.862627   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862630   44141 command_runner.go:130] >       "size": "89437512",
	I0906 19:31:58.862634   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.862638   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.862641   44141 command_runner.go:130] >       },
	I0906 19:31:58.862645   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862650   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862654   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862660   44141 command_runner.go:130] >     },
	I0906 19:31:58.862663   44141 command_runner.go:130] >     {
	I0906 19:31:58.862669   44141 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0906 19:31:58.862674   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862683   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0906 19:31:58.862689   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862697   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862706   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0906 19:31:58.862713   44141 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0906 19:31:58.862718   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862723   44141 command_runner.go:130] >       "size": "92728217",
	I0906 19:31:58.862727   44141 command_runner.go:130] >       "uid": null,
	I0906 19:31:58.862731   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862735   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862738   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862742   44141 command_runner.go:130] >     },
	I0906 19:31:58.862745   44141 command_runner.go:130] >     {
	I0906 19:31:58.862754   44141 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0906 19:31:58.862760   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862765   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0906 19:31:58.862771   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862774   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862782   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0906 19:31:58.862793   44141 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0906 19:31:58.862798   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862801   44141 command_runner.go:130] >       "size": "68420936",
	I0906 19:31:58.862806   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.862810   44141 command_runner.go:130] >         "value": "0"
	I0906 19:31:58.862813   44141 command_runner.go:130] >       },
	I0906 19:31:58.862817   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862821   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862825   44141 command_runner.go:130] >       "pinned": false
	I0906 19:31:58.862829   44141 command_runner.go:130] >     },
	I0906 19:31:58.862832   44141 command_runner.go:130] >     {
	I0906 19:31:58.862838   44141 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0906 19:31:58.862845   44141 command_runner.go:130] >       "repoTags": [
	I0906 19:31:58.862849   44141 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0906 19:31:58.862852   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862856   44141 command_runner.go:130] >       "repoDigests": [
	I0906 19:31:58.862863   44141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0906 19:31:58.862871   44141 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0906 19:31:58.862875   44141 command_runner.go:130] >       ],
	I0906 19:31:58.862885   44141 command_runner.go:130] >       "size": "742080",
	I0906 19:31:58.862889   44141 command_runner.go:130] >       "uid": {
	I0906 19:31:58.862893   44141 command_runner.go:130] >         "value": "65535"
	I0906 19:31:58.862899   44141 command_runner.go:130] >       },
	I0906 19:31:58.862903   44141 command_runner.go:130] >       "username": "",
	I0906 19:31:58.862906   44141 command_runner.go:130] >       "spec": null,
	I0906 19:31:58.862910   44141 command_runner.go:130] >       "pinned": true
	I0906 19:31:58.862913   44141 command_runner.go:130] >     }
	I0906 19:31:58.862916   44141 command_runner.go:130] >   ]
	I0906 19:31:58.862922   44141 command_runner.go:130] > }
	I0906 19:31:58.863521   44141 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:31:58.863538   44141 cache_images.go:84] Images are preloaded, skipping loading
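	crio.go treats the preload as satisfied because the crictl image dump above already contains every image required for Kubernetes v1.31.0 on CRI-O. To eyeball the same list from the node, the JSON can be flattened to tags (a sketch; it assumes jq is available on the guest, which the minikube ISO does not promise):

		sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort

	For this node that resolves to the kindnetd, busybox, storage-provisioner, coredns, etcd, pause and kube-apiserver/controller-manager/proxy/scheduler images listed in the dump; plain "sudo crictl images" prints the same information as a table.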
	I0906 19:31:58.863546   44141 kubeadm.go:934] updating node { 192.168.39.11 8443 v1.31.0 crio true true} ...
	I0906 19:31:58.863631   44141 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-002640 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-002640 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
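	The kubeadm.go:946 entry above is the kubelet systemd drop-in minikube generates for this node, with the hostname-override and node-ip already filled in from the machine config. Assembled into file form it looks roughly like this (the directives are copied from the log; the drop-in path is not logged, so the location in the comment is only an assumption):

		# assumed location: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
		[Unit]
		Wants=crio.service

		[Service]
		ExecStart=
		ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-002640 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11

		[Install]

	The empty ExecStart= line resets whatever ExecStart the base kubelet.service unit defines before the minikube-specific command line is installed in its place.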
	I0906 19:31:58.863688   44141 ssh_runner.go:195] Run: crio config
	I0906 19:31:58.895019   44141 command_runner.go:130] ! time="2024-09-06 19:31:58.864354224Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0906 19:31:58.901930   44141 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0906 19:31:58.908688   44141 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0906 19:31:58.908713   44141 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0906 19:31:58.908720   44141 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0906 19:31:58.908723   44141 command_runner.go:130] > #
	I0906 19:31:58.908730   44141 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0906 19:31:58.908736   44141 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0906 19:31:58.908742   44141 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0906 19:31:58.908751   44141 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0906 19:31:58.908756   44141 command_runner.go:130] > # reload'.
	I0906 19:31:58.908765   44141 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0906 19:31:58.908779   44141 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0906 19:31:58.908795   44141 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0906 19:31:58.908804   44141 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0906 19:31:58.908810   44141 command_runner.go:130] > [crio]
	I0906 19:31:58.908816   44141 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0906 19:31:58.908820   44141 command_runner.go:130] > # containers images, in this directory.
	I0906 19:31:58.908824   44141 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0906 19:31:58.908834   44141 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0906 19:31:58.908844   44141 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0906 19:31:58.908852   44141 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0906 19:31:58.908869   44141 command_runner.go:130] > # imagestore = ""
	I0906 19:31:58.908880   44141 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0906 19:31:58.908892   44141 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0906 19:31:58.908899   44141 command_runner.go:130] > storage_driver = "overlay"
	I0906 19:31:58.908906   44141 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0906 19:31:58.908914   44141 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0906 19:31:58.908919   44141 command_runner.go:130] > storage_option = [
	I0906 19:31:58.908923   44141 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0906 19:31:58.908935   44141 command_runner.go:130] > ]
	I0906 19:31:58.908944   44141 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0906 19:31:58.908950   44141 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0906 19:31:58.908957   44141 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0906 19:31:58.908962   44141 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0906 19:31:58.908968   44141 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0906 19:31:58.908975   44141 command_runner.go:130] > # always happen on a node reboot
	I0906 19:31:58.908979   44141 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0906 19:31:58.908996   44141 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0906 19:31:58.909004   44141 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0906 19:31:58.909009   44141 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0906 19:31:58.909014   44141 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0906 19:31:58.909020   44141 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0906 19:31:58.909028   44141 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0906 19:31:58.909033   44141 command_runner.go:130] > # internal_wipe = true
	I0906 19:31:58.909040   44141 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0906 19:31:58.909046   44141 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0906 19:31:58.909052   44141 command_runner.go:130] > # internal_repair = false
	I0906 19:31:58.909057   44141 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0906 19:31:58.909062   44141 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0906 19:31:58.909068   44141 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0906 19:31:58.909072   44141 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0906 19:31:58.909078   44141 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0906 19:31:58.909084   44141 command_runner.go:130] > [crio.api]
	I0906 19:31:58.909090   44141 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0906 19:31:58.909096   44141 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0906 19:31:58.909101   44141 command_runner.go:130] > # IP address on which the stream server will listen.
	I0906 19:31:58.909107   44141 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0906 19:31:58.909113   44141 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0906 19:31:58.909136   44141 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0906 19:31:58.909142   44141 command_runner.go:130] > # stream_port = "0"
	I0906 19:31:58.909148   44141 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0906 19:31:58.909152   44141 command_runner.go:130] > # stream_enable_tls = false
	I0906 19:31:58.909158   44141 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0906 19:31:58.909162   44141 command_runner.go:130] > # stream_idle_timeout = ""
	I0906 19:31:58.909168   44141 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0906 19:31:58.909181   44141 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0906 19:31:58.909187   44141 command_runner.go:130] > # minutes.
	I0906 19:31:58.909191   44141 command_runner.go:130] > # stream_tls_cert = ""
	I0906 19:31:58.909197   44141 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0906 19:31:58.909205   44141 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0906 19:31:58.909209   44141 command_runner.go:130] > # stream_tls_key = ""
	I0906 19:31:58.909215   44141 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0906 19:31:58.909221   44141 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0906 19:31:58.909242   44141 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0906 19:31:58.909248   44141 command_runner.go:130] > # stream_tls_ca = ""
	I0906 19:31:58.909255   44141 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0906 19:31:58.909262   44141 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0906 19:31:58.909269   44141 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0906 19:31:58.909273   44141 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0906 19:31:58.909279   44141 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0906 19:31:58.909286   44141 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0906 19:31:58.909290   44141 command_runner.go:130] > [crio.runtime]
	I0906 19:31:58.909295   44141 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0906 19:31:58.909304   44141 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0906 19:31:58.909308   44141 command_runner.go:130] > # "nofile=1024:2048"
	I0906 19:31:58.909314   44141 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0906 19:31:58.909319   44141 command_runner.go:130] > # default_ulimits = [
	I0906 19:31:58.909323   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909329   44141 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0906 19:31:58.909334   44141 command_runner.go:130] > # no_pivot = false
	I0906 19:31:58.909340   44141 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0906 19:31:58.909346   44141 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0906 19:31:58.909353   44141 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0906 19:31:58.909358   44141 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0906 19:31:58.909363   44141 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0906 19:31:58.909369   44141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0906 19:31:58.909375   44141 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0906 19:31:58.909379   44141 command_runner.go:130] > # Cgroup setting for conmon
	I0906 19:31:58.909386   44141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0906 19:31:58.909392   44141 command_runner.go:130] > conmon_cgroup = "pod"
	I0906 19:31:58.909398   44141 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0906 19:31:58.909409   44141 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0906 19:31:58.909418   44141 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0906 19:31:58.909422   44141 command_runner.go:130] > conmon_env = [
	I0906 19:31:58.909430   44141 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0906 19:31:58.909433   44141 command_runner.go:130] > ]
	I0906 19:31:58.909438   44141 command_runner.go:130] > # Additional environment variables to set for all the
	I0906 19:31:58.909445   44141 command_runner.go:130] > # containers. These are overridden if set in the
	I0906 19:31:58.909450   44141 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0906 19:31:58.909455   44141 command_runner.go:130] > # default_env = [
	I0906 19:31:58.909460   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909465   44141 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0906 19:31:58.909472   44141 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0906 19:31:58.909478   44141 command_runner.go:130] > # selinux = false
	I0906 19:31:58.909484   44141 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0906 19:31:58.909490   44141 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0906 19:31:58.909496   44141 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0906 19:31:58.909506   44141 command_runner.go:130] > # seccomp_profile = ""
	I0906 19:31:58.909511   44141 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0906 19:31:58.909518   44141 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0906 19:31:58.909524   44141 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0906 19:31:58.909531   44141 command_runner.go:130] > # which might increase security.
	I0906 19:31:58.909536   44141 command_runner.go:130] > # This option is currently deprecated,
	I0906 19:31:58.909541   44141 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0906 19:31:58.909547   44141 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0906 19:31:58.909553   44141 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0906 19:31:58.909559   44141 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0906 19:31:58.909567   44141 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0906 19:31:58.909573   44141 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0906 19:31:58.909580   44141 command_runner.go:130] > # This option supports live configuration reload.
	I0906 19:31:58.909585   44141 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0906 19:31:58.909590   44141 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0906 19:31:58.909597   44141 command_runner.go:130] > # the cgroup blockio controller.
	I0906 19:31:58.909601   44141 command_runner.go:130] > # blockio_config_file = ""
	I0906 19:31:58.909607   44141 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0906 19:31:58.909612   44141 command_runner.go:130] > # blockio parameters.
	I0906 19:31:58.909616   44141 command_runner.go:130] > # blockio_reload = false
	I0906 19:31:58.909626   44141 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0906 19:31:58.909632   44141 command_runner.go:130] > # irqbalance daemon.
	I0906 19:31:58.909638   44141 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0906 19:31:58.909644   44141 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0906 19:31:58.909652   44141 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0906 19:31:58.909658   44141 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0906 19:31:58.909664   44141 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0906 19:31:58.909670   44141 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0906 19:31:58.909678   44141 command_runner.go:130] > # This option supports live configuration reload.
	I0906 19:31:58.909682   44141 command_runner.go:130] > # rdt_config_file = ""
	I0906 19:31:58.909688   44141 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0906 19:31:58.909694   44141 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0906 19:31:58.909723   44141 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0906 19:31:58.909729   44141 command_runner.go:130] > # separate_pull_cgroup = ""
	I0906 19:31:58.909735   44141 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0906 19:31:58.909743   44141 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0906 19:31:58.909747   44141 command_runner.go:130] > # will be added.
	I0906 19:31:58.909750   44141 command_runner.go:130] > # default_capabilities = [
	I0906 19:31:58.909754   44141 command_runner.go:130] > # 	"CHOWN",
	I0906 19:31:58.909758   44141 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0906 19:31:58.909761   44141 command_runner.go:130] > # 	"FSETID",
	I0906 19:31:58.909765   44141 command_runner.go:130] > # 	"FOWNER",
	I0906 19:31:58.909768   44141 command_runner.go:130] > # 	"SETGID",
	I0906 19:31:58.909772   44141 command_runner.go:130] > # 	"SETUID",
	I0906 19:31:58.909778   44141 command_runner.go:130] > # 	"SETPCAP",
	I0906 19:31:58.909782   44141 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0906 19:31:58.909785   44141 command_runner.go:130] > # 	"KILL",
	I0906 19:31:58.909789   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909798   44141 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0906 19:31:58.909804   44141 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0906 19:31:58.909808   44141 command_runner.go:130] > # add_inheritable_capabilities = false
	I0906 19:31:58.909814   44141 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0906 19:31:58.909822   44141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0906 19:31:58.909826   44141 command_runner.go:130] > default_sysctls = [
	I0906 19:31:58.909830   44141 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0906 19:31:58.909836   44141 command_runner.go:130] > ]
	I0906 19:31:58.909845   44141 command_runner.go:130] > # List of devices on the host that a
	I0906 19:31:58.909853   44141 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0906 19:31:58.909857   44141 command_runner.go:130] > # allowed_devices = [
	I0906 19:31:58.909862   44141 command_runner.go:130] > # 	"/dev/fuse",
	I0906 19:31:58.909871   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909878   44141 command_runner.go:130] > # List of additional devices, specified as
	I0906 19:31:58.909885   44141 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0906 19:31:58.909892   44141 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0906 19:31:58.909898   44141 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0906 19:31:58.909903   44141 command_runner.go:130] > # additional_devices = [
	I0906 19:31:58.909908   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909913   44141 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0906 19:31:58.909919   44141 command_runner.go:130] > # cdi_spec_dirs = [
	I0906 19:31:58.909923   44141 command_runner.go:130] > # 	"/etc/cdi",
	I0906 19:31:58.909928   44141 command_runner.go:130] > # 	"/var/run/cdi",
	I0906 19:31:58.909932   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909939   44141 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0906 19:31:58.909947   44141 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0906 19:31:58.909951   44141 command_runner.go:130] > # Defaults to false.
	I0906 19:31:58.909956   44141 command_runner.go:130] > # device_ownership_from_security_context = false
	I0906 19:31:58.909962   44141 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0906 19:31:58.909970   44141 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0906 19:31:58.909974   44141 command_runner.go:130] > # hooks_dir = [
	I0906 19:31:58.909980   44141 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0906 19:31:58.909984   44141 command_runner.go:130] > # ]
	I0906 19:31:58.909992   44141 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0906 19:31:58.909998   44141 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0906 19:31:58.910005   44141 command_runner.go:130] > # its default mounts from the following two files:
	I0906 19:31:58.910008   44141 command_runner.go:130] > #
	I0906 19:31:58.910016   44141 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0906 19:31:58.910023   44141 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0906 19:31:58.910030   44141 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0906 19:31:58.910033   44141 command_runner.go:130] > #
	I0906 19:31:58.910039   44141 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0906 19:31:58.910047   44141 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0906 19:31:58.910053   44141 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0906 19:31:58.910065   44141 command_runner.go:130] > #      only add mounts it finds in this file.
	I0906 19:31:58.910070   44141 command_runner.go:130] > #
	I0906 19:31:58.910074   44141 command_runner.go:130] > # default_mounts_file = ""
	I0906 19:31:58.910079   44141 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0906 19:31:58.910086   44141 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0906 19:31:58.910092   44141 command_runner.go:130] > pids_limit = 1024
	I0906 19:31:58.910098   44141 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0906 19:31:58.910105   44141 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0906 19:31:58.910111   44141 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0906 19:31:58.910120   44141 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0906 19:31:58.910126   44141 command_runner.go:130] > # log_size_max = -1
	I0906 19:31:58.910135   44141 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0906 19:31:58.910141   44141 command_runner.go:130] > # log_to_journald = false
	I0906 19:31:58.910149   44141 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0906 19:31:58.910156   44141 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0906 19:31:58.910161   44141 command_runner.go:130] > # Path to directory for container attach sockets.
	I0906 19:31:58.910168   44141 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0906 19:31:58.910173   44141 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0906 19:31:58.910180   44141 command_runner.go:130] > # bind_mount_prefix = ""
	I0906 19:31:58.910185   44141 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0906 19:31:58.910191   44141 command_runner.go:130] > # read_only = false
	I0906 19:31:58.910196   44141 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0906 19:31:58.910204   44141 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0906 19:31:58.910208   44141 command_runner.go:130] > # live configuration reload.
	I0906 19:31:58.910213   44141 command_runner.go:130] > # log_level = "info"
	I0906 19:31:58.910218   44141 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0906 19:31:58.910226   44141 command_runner.go:130] > # This option supports live configuration reload.
	I0906 19:31:58.910232   44141 command_runner.go:130] > # log_filter = ""
	I0906 19:31:58.910238   44141 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0906 19:31:58.910247   44141 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0906 19:31:58.910251   44141 command_runner.go:130] > # separated by comma.
	I0906 19:31:58.910258   44141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0906 19:31:58.910265   44141 command_runner.go:130] > # uid_mappings = ""
	I0906 19:31:58.910270   44141 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0906 19:31:58.910278   44141 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0906 19:31:58.910282   44141 command_runner.go:130] > # separated by comma.
	I0906 19:31:58.910297   44141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0906 19:31:58.910303   44141 command_runner.go:130] > # gid_mappings = ""
	I0906 19:31:58.910309   44141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0906 19:31:58.910315   44141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0906 19:31:58.910322   44141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0906 19:31:58.910329   44141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0906 19:31:58.910336   44141 command_runner.go:130] > # minimum_mappable_uid = -1
	I0906 19:31:58.910342   44141 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0906 19:31:58.910350   44141 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0906 19:31:58.910356   44141 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0906 19:31:58.910364   44141 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0906 19:31:58.910369   44141 command_runner.go:130] > # minimum_mappable_gid = -1
	I0906 19:31:58.910375   44141 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0906 19:31:58.910383   44141 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0906 19:31:58.910388   44141 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0906 19:31:58.910395   44141 command_runner.go:130] > # ctr_stop_timeout = 30
	I0906 19:31:58.910400   44141 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0906 19:31:58.910407   44141 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0906 19:31:58.910412   44141 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0906 19:31:58.910419   44141 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0906 19:31:58.910423   44141 command_runner.go:130] > drop_infra_ctr = false
	I0906 19:31:58.910431   44141 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0906 19:31:58.910437   44141 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0906 19:31:58.910445   44141 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0906 19:31:58.910450   44141 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0906 19:31:58.910457   44141 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0906 19:31:58.910469   44141 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0906 19:31:58.910477   44141 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0906 19:31:58.910482   44141 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0906 19:31:58.910488   44141 command_runner.go:130] > # shared_cpuset = ""
	I0906 19:31:58.910494   44141 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0906 19:31:58.910505   44141 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0906 19:31:58.910511   44141 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0906 19:31:58.910517   44141 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0906 19:31:58.910524   44141 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0906 19:31:58.910529   44141 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0906 19:31:58.910542   44141 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0906 19:31:58.910548   44141 command_runner.go:130] > # enable_criu_support = false
	I0906 19:31:58.910553   44141 command_runner.go:130] > # Enable/disable the generation of the container,
	I0906 19:31:58.910561   44141 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0906 19:31:58.910565   44141 command_runner.go:130] > # enable_pod_events = false
	I0906 19:31:58.910573   44141 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0906 19:31:58.910586   44141 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0906 19:31:58.910590   44141 command_runner.go:130] > # default_runtime = "runc"
	I0906 19:31:58.910595   44141 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0906 19:31:58.910604   44141 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0906 19:31:58.910613   44141 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0906 19:31:58.910620   44141 command_runner.go:130] > # creation as a file is not desired either.
	I0906 19:31:58.910628   44141 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0906 19:31:58.910635   44141 command_runner.go:130] > # the hostname is being managed dynamically.
	I0906 19:31:58.910640   44141 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0906 19:31:58.910645   44141 command_runner.go:130] > # ]
	I0906 19:31:58.910650   44141 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0906 19:31:58.910658   44141 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0906 19:31:58.910664   44141 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0906 19:31:58.910671   44141 command_runner.go:130] > # Each entry in the table should follow the format:
	I0906 19:31:58.910674   44141 command_runner.go:130] > #
	I0906 19:31:58.910682   44141 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0906 19:31:58.910690   44141 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0906 19:31:58.910773   44141 command_runner.go:130] > # runtime_type = "oci"
	I0906 19:31:58.910786   44141 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0906 19:31:58.910790   44141 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0906 19:31:58.910794   44141 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0906 19:31:58.910799   44141 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0906 19:31:58.910802   44141 command_runner.go:130] > # monitor_env = []
	I0906 19:31:58.910807   44141 command_runner.go:130] > # privileged_without_host_devices = false
	I0906 19:31:58.910813   44141 command_runner.go:130] > # allowed_annotations = []
	I0906 19:31:58.910819   44141 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0906 19:31:58.910824   44141 command_runner.go:130] > # Where:
	I0906 19:31:58.910829   44141 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0906 19:31:58.910837   44141 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0906 19:31:58.910847   44141 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0906 19:31:58.910855   44141 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0906 19:31:58.910859   44141 command_runner.go:130] > #   in $PATH.
	I0906 19:31:58.910867   44141 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0906 19:31:58.910872   44141 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0906 19:31:58.910878   44141 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0906 19:31:58.910882   44141 command_runner.go:130] > #   state.
	I0906 19:31:58.910889   44141 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0906 19:31:58.910897   44141 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0906 19:31:58.910902   44141 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0906 19:31:58.910909   44141 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0906 19:31:58.910915   44141 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0906 19:31:58.910923   44141 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0906 19:31:58.910930   44141 command_runner.go:130] > #   The currently recognized values are:
	I0906 19:31:58.910936   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0906 19:31:58.910945   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0906 19:31:58.910951   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0906 19:31:58.910958   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0906 19:31:58.910965   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0906 19:31:58.910977   44141 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0906 19:31:58.910983   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0906 19:31:58.910991   44141 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0906 19:31:58.910997   44141 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0906 19:31:58.911005   44141 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0906 19:31:58.911009   44141 command_runner.go:130] > #   deprecated option "conmon".
	I0906 19:31:58.911019   44141 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0906 19:31:58.911026   44141 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0906 19:31:58.911032   44141 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0906 19:31:58.911039   44141 command_runner.go:130] > #   should be moved to the container's cgroup
	I0906 19:31:58.911045   44141 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0906 19:31:58.911052   44141 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0906 19:31:58.911058   44141 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0906 19:31:58.911065   44141 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0906 19:31:58.911069   44141 command_runner.go:130] > #
	I0906 19:31:58.911074   44141 command_runner.go:130] > # Using the seccomp notifier feature:
	I0906 19:31:58.911077   44141 command_runner.go:130] > #
	I0906 19:31:58.911088   44141 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0906 19:31:58.911096   44141 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0906 19:31:58.911102   44141 command_runner.go:130] > #
	I0906 19:31:58.911108   44141 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0906 19:31:58.911115   44141 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0906 19:31:58.911118   44141 command_runner.go:130] > #
	I0906 19:31:58.911124   44141 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0906 19:31:58.911130   44141 command_runner.go:130] > # feature.
	I0906 19:31:58.911133   44141 command_runner.go:130] > #
	I0906 19:31:58.911138   44141 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0906 19:31:58.911146   44141 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0906 19:31:58.911152   44141 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0906 19:31:58.911160   44141 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0906 19:31:58.911165   44141 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0906 19:31:58.911169   44141 command_runner.go:130] > #
	I0906 19:31:58.911175   44141 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0906 19:31:58.911183   44141 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0906 19:31:58.911186   44141 command_runner.go:130] > #
	I0906 19:31:58.911195   44141 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0906 19:31:58.911203   44141 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0906 19:31:58.911206   44141 command_runner.go:130] > #
	I0906 19:31:58.911214   44141 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0906 19:31:58.911220   44141 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0906 19:31:58.911223   44141 command_runner.go:130] > # limitation.
	I0906 19:31:58.911231   44141 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0906 19:31:58.911238   44141 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0906 19:31:58.911242   44141 command_runner.go:130] > runtime_type = "oci"
	I0906 19:31:58.911248   44141 command_runner.go:130] > runtime_root = "/run/runc"
	I0906 19:31:58.911252   44141 command_runner.go:130] > runtime_config_path = ""
	I0906 19:31:58.911256   44141 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0906 19:31:58.911263   44141 command_runner.go:130] > monitor_cgroup = "pod"
	I0906 19:31:58.911267   44141 command_runner.go:130] > monitor_exec_cgroup = ""
	I0906 19:31:58.911273   44141 command_runner.go:130] > monitor_env = [
	I0906 19:31:58.911279   44141 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0906 19:31:58.911284   44141 command_runner.go:130] > ]
	I0906 19:31:58.911288   44141 command_runner.go:130] > privileged_without_host_devices = false
	I0906 19:31:58.911301   44141 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0906 19:31:58.911308   44141 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0906 19:31:58.911314   44141 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0906 19:31:58.911323   44141 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0906 19:31:58.911332   44141 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0906 19:31:58.911338   44141 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0906 19:31:58.911347   44141 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0906 19:31:58.911357   44141 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0906 19:31:58.911365   44141 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0906 19:31:58.911372   44141 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0906 19:31:58.911375   44141 command_runner.go:130] > # Example:
	I0906 19:31:58.911379   44141 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0906 19:31:58.911384   44141 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0906 19:31:58.911388   44141 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0906 19:31:58.911393   44141 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0906 19:31:58.911396   44141 command_runner.go:130] > # cpuset = 0
	I0906 19:31:58.911400   44141 command_runner.go:130] > # cpushares = "0-1"
	I0906 19:31:58.911403   44141 command_runner.go:130] > # Where:
	I0906 19:31:58.911407   44141 command_runner.go:130] > # The workload name is workload-type.
	I0906 19:31:58.911414   44141 command_runner.go:130] > # To opt into this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0906 19:31:58.911418   44141 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0906 19:31:58.911424   44141 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0906 19:31:58.911431   44141 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0906 19:31:58.911436   44141 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0906 19:31:58.911440   44141 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0906 19:31:58.911446   44141 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0906 19:31:58.911450   44141 command_runner.go:130] > # Default value is set to true
	I0906 19:31:58.911454   44141 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0906 19:31:58.911459   44141 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0906 19:31:58.911463   44141 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0906 19:31:58.911467   44141 command_runner.go:130] > # Default value is set to 'false'
	I0906 19:31:58.911471   44141 command_runner.go:130] > # disable_hostport_mapping = false
	I0906 19:31:58.911477   44141 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0906 19:31:58.911480   44141 command_runner.go:130] > #
	I0906 19:31:58.911485   44141 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0906 19:31:58.911490   44141 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0906 19:31:58.911504   44141 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0906 19:31:58.911510   44141 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0906 19:31:58.911515   44141 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0906 19:31:58.911518   44141 command_runner.go:130] > [crio.image]
	I0906 19:31:58.911528   44141 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0906 19:31:58.911532   44141 command_runner.go:130] > # default_transport = "docker://"
	I0906 19:31:58.911537   44141 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0906 19:31:58.911543   44141 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0906 19:31:58.911547   44141 command_runner.go:130] > # global_auth_file = ""
	I0906 19:31:58.911553   44141 command_runner.go:130] > # The image used to instantiate infra containers.
	I0906 19:31:58.911558   44141 command_runner.go:130] > # This option supports live configuration reload.
	I0906 19:31:58.911565   44141 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0906 19:31:58.911571   44141 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0906 19:31:58.911579   44141 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0906 19:31:58.911584   44141 command_runner.go:130] > # This option supports live configuration reload.
	I0906 19:31:58.911590   44141 command_runner.go:130] > # pause_image_auth_file = ""
	I0906 19:31:58.911596   44141 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0906 19:31:58.911604   44141 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0906 19:31:58.911611   44141 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0906 19:31:58.911617   44141 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0906 19:31:58.911622   44141 command_runner.go:130] > # pause_command = "/pause"
	I0906 19:31:58.911628   44141 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0906 19:31:58.911635   44141 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0906 19:31:58.911641   44141 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0906 19:31:58.911650   44141 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0906 19:31:58.911658   44141 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0906 19:31:58.911664   44141 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0906 19:31:58.911670   44141 command_runner.go:130] > # pinned_images = [
	I0906 19:31:58.911674   44141 command_runner.go:130] > # ]
	I0906 19:31:58.911682   44141 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0906 19:31:58.911688   44141 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0906 19:31:58.911696   44141 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0906 19:31:58.911702   44141 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0906 19:31:58.911709   44141 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0906 19:31:58.911713   44141 command_runner.go:130] > # signature_policy = ""
	I0906 19:31:58.911721   44141 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0906 19:31:58.911737   44141 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0906 19:31:58.911745   44141 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0906 19:31:58.911751   44141 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0906 19:31:58.911758   44141 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0906 19:31:58.911766   44141 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0906 19:31:58.911774   44141 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0906 19:31:58.911780   44141 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0906 19:31:58.911786   44141 command_runner.go:130] > # changing them here.
	I0906 19:31:58.911790   44141 command_runner.go:130] > # insecure_registries = [
	I0906 19:31:58.911795   44141 command_runner.go:130] > # ]
	I0906 19:31:58.911801   44141 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0906 19:31:58.911808   44141 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0906 19:31:58.911812   44141 command_runner.go:130] > # image_volumes = "mkdir"
	I0906 19:31:58.911819   44141 command_runner.go:130] > # Temporary directory to use for storing big files
	I0906 19:31:58.911824   44141 command_runner.go:130] > # big_files_temporary_dir = ""
	I0906 19:31:58.911832   44141 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0906 19:31:58.911836   44141 command_runner.go:130] > # CNI plugins.
	I0906 19:31:58.911839   44141 command_runner.go:130] > [crio.network]
	I0906 19:31:58.911844   44141 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0906 19:31:58.911852   44141 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0906 19:31:58.911856   44141 command_runner.go:130] > # cni_default_network = ""
	I0906 19:31:58.911863   44141 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0906 19:31:58.911868   44141 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0906 19:31:58.911876   44141 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0906 19:31:58.911880   44141 command_runner.go:130] > # plugin_dirs = [
	I0906 19:31:58.911886   44141 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0906 19:31:58.911889   44141 command_runner.go:130] > # ]
	I0906 19:31:58.911897   44141 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0906 19:31:58.911901   44141 command_runner.go:130] > [crio.metrics]
	I0906 19:31:58.911907   44141 command_runner.go:130] > # Globally enable or disable metrics support.
	I0906 19:31:58.911911   44141 command_runner.go:130] > enable_metrics = true
	I0906 19:31:58.911916   44141 command_runner.go:130] > # Specify enabled metrics collectors.
	I0906 19:31:58.911922   44141 command_runner.go:130] > # Per default all metrics are enabled.
	I0906 19:31:58.911928   44141 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0906 19:31:58.911936   44141 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0906 19:31:58.911942   44141 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0906 19:31:58.911952   44141 command_runner.go:130] > # metrics_collectors = [
	I0906 19:31:58.911958   44141 command_runner.go:130] > # 	"operations",
	I0906 19:31:58.911963   44141 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0906 19:31:58.911969   44141 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0906 19:31:58.911973   44141 command_runner.go:130] > # 	"operations_errors",
	I0906 19:31:58.911980   44141 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0906 19:31:58.911984   44141 command_runner.go:130] > # 	"image_pulls_by_name",
	I0906 19:31:58.911990   44141 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0906 19:31:58.911994   44141 command_runner.go:130] > # 	"image_pulls_failures",
	I0906 19:31:58.912001   44141 command_runner.go:130] > # 	"image_pulls_successes",
	I0906 19:31:58.912005   44141 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0906 19:31:58.912011   44141 command_runner.go:130] > # 	"image_layer_reuse",
	I0906 19:31:58.912015   44141 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0906 19:31:58.912019   44141 command_runner.go:130] > # 	"containers_oom_total",
	I0906 19:31:58.912023   44141 command_runner.go:130] > # 	"containers_oom",
	I0906 19:31:58.912027   44141 command_runner.go:130] > # 	"processes_defunct",
	I0906 19:31:58.912033   44141 command_runner.go:130] > # 	"operations_total",
	I0906 19:31:58.912037   44141 command_runner.go:130] > # 	"operations_latency_seconds",
	I0906 19:31:58.912044   44141 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0906 19:31:58.912048   44141 command_runner.go:130] > # 	"operations_errors_total",
	I0906 19:31:58.912054   44141 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0906 19:31:58.912058   44141 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0906 19:31:58.912065   44141 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0906 19:31:58.912069   44141 command_runner.go:130] > # 	"image_pulls_success_total",
	I0906 19:31:58.912076   44141 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0906 19:31:58.912080   44141 command_runner.go:130] > # 	"containers_oom_count_total",
	I0906 19:31:58.912091   44141 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0906 19:31:58.912098   44141 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0906 19:31:58.912101   44141 command_runner.go:130] > # ]
	I0906 19:31:58.912106   44141 command_runner.go:130] > # The port on which the metrics server will listen.
	I0906 19:31:58.912112   44141 command_runner.go:130] > # metrics_port = 9090
	I0906 19:31:58.912116   44141 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0906 19:31:58.912122   44141 command_runner.go:130] > # metrics_socket = ""
	I0906 19:31:58.912127   44141 command_runner.go:130] > # The certificate for the secure metrics server.
	I0906 19:31:58.912134   44141 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0906 19:31:58.912140   44141 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0906 19:31:58.912151   44141 command_runner.go:130] > # certificate on any modification event.
	I0906 19:31:58.912157   44141 command_runner.go:130] > # metrics_cert = ""
	I0906 19:31:58.912163   44141 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0906 19:31:58.912169   44141 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0906 19:31:58.912173   44141 command_runner.go:130] > # metrics_key = ""
	I0906 19:31:58.912181   44141 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0906 19:31:58.912184   44141 command_runner.go:130] > [crio.tracing]
	I0906 19:31:58.912190   44141 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0906 19:31:58.912195   44141 command_runner.go:130] > # enable_tracing = false
	I0906 19:31:58.912201   44141 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0906 19:31:58.912207   44141 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0906 19:31:58.912214   44141 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0906 19:31:58.912221   44141 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0906 19:31:58.912225   44141 command_runner.go:130] > # CRI-O NRI configuration.
	I0906 19:31:58.912230   44141 command_runner.go:130] > [crio.nri]
	I0906 19:31:58.912234   44141 command_runner.go:130] > # Globally enable or disable NRI.
	I0906 19:31:58.912238   44141 command_runner.go:130] > # enable_nri = false
	I0906 19:31:58.912243   44141 command_runner.go:130] > # NRI socket to listen on.
	I0906 19:31:58.912249   44141 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0906 19:31:58.912254   44141 command_runner.go:130] > # NRI plugin directory to use.
	I0906 19:31:58.912261   44141 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0906 19:31:58.912266   44141 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0906 19:31:58.912273   44141 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0906 19:31:58.912278   44141 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0906 19:31:58.912284   44141 command_runner.go:130] > # nri_disable_connections = false
	I0906 19:31:58.912290   44141 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0906 19:31:58.912296   44141 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0906 19:31:58.912302   44141 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0906 19:31:58.912308   44141 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0906 19:31:58.912314   44141 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0906 19:31:58.912320   44141 command_runner.go:130] > [crio.stats]
	I0906 19:31:58.912325   44141 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0906 19:31:58.912333   44141 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0906 19:31:58.912337   44141 command_runner.go:130] > # stats_collection_period = 0
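Editor's note: the block above is the rendered /etc/crio/crio.conf that minikube dumps before configuring the node; the settings that matter for this profile are cgroup_manager = "cgroupfs", pids_limit = 1024, pause_image = "registry.k8s.io/pause:3.10" and the [crio.runtime.runtimes.runc] monitor entries. A minimal, hypothetical Go sketch for pulling top-level key/value pairs out of such a file (a naive line scan, not a real TOML parser and not minikube's own code) could look like this:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // readCrioSettings does a naive line scan of a crio.conf-style file and
    // returns uncommented `key = value` pairs. It ignores TOML tables,
    // multi-line arrays and quoting subtleties, so it is only a rough
    // inspection aid.
    func readCrioSettings(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        settings := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, "[") {
                continue
            }
            parts := strings.SplitN(line, "=", 2)
            if len(parts) != 2 {
                continue
            }
            key := strings.TrimSpace(parts[0])
            val := strings.Trim(strings.TrimSpace(parts[1]), "\"")
            settings[key] = val
        }
        return settings, sc.Err()
    }

    func main() {
        s, err := readCrioSettings("/etc/crio/crio.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Print the values this test run depends on.
        for _, k := range []string{"cgroup_manager", "pids_limit", "pause_image"} {
            fmt.Printf("%s = %q\n", k, s[k])
        }
    }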
	I0906 19:31:58.912484   44141 cni.go:84] Creating CNI manager for ""
	I0906 19:31:58.912503   44141 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0906 19:31:58.912518   44141 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:31:58.912540   44141 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-002640 NodeName:multinode-002640 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:31:58.912662   44141 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-002640"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
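Editor's note: the generated kubeadm config above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a rough illustration of that layout (again not minikube's own code; the path is taken from the log), splitting the file into its documents and listing their kinds needs only the Go standard library:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Path taken from the ssh_runner scp line below.
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // kubeadm config files separate documents with a line containing only "---".
        for i, doc := range strings.Split(string(data), "\n---\n") {
            kind := "(unknown)"
            for _, line := range strings.Split(doc, "\n") {
                trimmed := strings.TrimSpace(line)
                if strings.HasPrefix(trimmed, "kind:") {
                    kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
                    break
                }
            }
            fmt.Printf("document %d: %s\n", i+1, kind)
        }
    }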
	I0906 19:31:58.912717   44141 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 19:31:58.924570   44141 command_runner.go:130] > kubeadm
	I0906 19:31:58.924592   44141 command_runner.go:130] > kubectl
	I0906 19:31:58.924598   44141 command_runner.go:130] > kubelet
	I0906 19:31:58.924651   44141 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:31:58.924696   44141 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 19:31:58.935587   44141 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0906 19:31:58.955494   44141 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:31:58.973450   44141 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0906 19:31:58.991811   44141 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I0906 19:31:58.995626   44141 command_runner.go:130] > 192.168.39.11	control-plane.minikube.internal
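Editor's note: the grep above confirms that /etc/hosts on the node already maps control-plane.minikube.internal to 192.168.39.11, so no rewrite is needed. A stdlib-only Go equivalent of that check, shown purely as an illustration of what the grep asserts:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hostsHasEntry reports whether /etc/hosts maps host to ip on some
    // non-comment line, mirroring the `grep "<ip>\t<host>$" /etc/hosts` check.
    func hostsHasEntry(ip, host string) (bool, error) {
        f, err := os.Open("/etc/hosts")
        if err != nil {
            return false, err
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) < 2 || strings.HasPrefix(fields[0], "#") {
                continue
            }
            if fields[0] != ip {
                continue
            }
            for _, name := range fields[1:] {
                if name == host {
                    return true, nil
                }
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := hostsHasEntry("192.168.39.11", "control-plane.minikube.internal")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("entry present:", ok)
    }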
	I0906 19:31:58.995740   44141 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:31:59.142783   44141 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:31:59.156973   44141 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640 for IP: 192.168.39.11
	I0906 19:31:59.156997   44141 certs.go:194] generating shared ca certs ...
	I0906 19:31:59.157011   44141 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:31:59.157165   44141 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:31:59.157208   44141 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:31:59.157218   44141 certs.go:256] generating profile certs ...
	I0906 19:31:59.157286   44141 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/client.key
	I0906 19:31:59.157340   44141 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/apiserver.key.7a18bd90
	I0906 19:31:59.157375   44141 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/proxy-client.key
	I0906 19:31:59.157383   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0906 19:31:59.157394   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0906 19:31:59.157404   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0906 19:31:59.157413   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0906 19:31:59.157423   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0906 19:31:59.157435   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0906 19:31:59.157448   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0906 19:31:59.157459   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0906 19:31:59.157505   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:31:59.157532   44141 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:31:59.157541   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:31:59.157572   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:31:59.157594   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:31:59.157621   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:31:59.157662   44141 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:31:59.157687   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:31:59.157701   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem -> /usr/share/ca-certificates/13178.pem
	I0906 19:31:59.157714   44141 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> /usr/share/ca-certificates/131782.pem
	I0906 19:31:59.158260   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:31:59.184010   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:31:59.208214   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:31:59.231717   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:31:59.255891   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 19:31:59.279017   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 19:31:59.302259   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:31:59.325499   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/multinode-002640/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 19:31:59.350079   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:31:59.373448   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:31:59.396812   44141 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:31:59.420141   44141 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
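The block above copies the shared CA material, the profile certificates, and the kubeconfig onto the node. As a rough sanity check, the staged files can be inspected directly (paths taken from the log; sketch only):

    # List the certificates minikube just staged
    ls -la /var/lib/minikube/certs /usr/share/ca-certificates

    # Confirm the API server certificate chains to the staged CA
    sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt \
      /var/lib/minikube/certs/apiserver.crt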
	I0906 19:31:59.436981   44141 ssh_runner.go:195] Run: openssl version
	I0906 19:31:59.442695   44141 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0906 19:31:59.442838   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:31:59.453510   44141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:31:59.457987   44141 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:31:59.458044   44141 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:31:59.458094   44141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:31:59.463621   44141 command_runner.go:130] > 51391683
	I0906 19:31:59.463683   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 19:31:59.472912   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:31:59.483578   44141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:31:59.487907   44141 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:31:59.487950   44141 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:31:59.487992   44141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:31:59.493550   44141 command_runner.go:130] > 3ec20f2e
	I0906 19:31:59.493601   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:31:59.502553   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:31:59.512956   44141 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:31:59.517456   44141 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:31:59.517474   44141 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:31:59.517553   44141 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:31:59.523022   44141 command_runner.go:130] > b5213941
	I0906 19:31:59.523157   44141 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
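The openssl runs above show how the extra PEMs become trusted system-wide: each file under /usr/share/ca-certificates is hashed with "openssl x509 -hash" and linked into /etc/ssl/certs under its subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A simplified shell sketch of that loop (the log actually links each PEM into /etc/ssl/certs first and then creates the hash symlink):

    # For every staged PEM, create the <subject-hash>.0 symlink OpenSSL expects
    for pem in /usr/share/ca-certificates/*.pem; do
      hash=$(openssl x509 -hash -noout -in "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    done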
	I0906 19:31:59.532459   44141 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:31:59.537047   44141 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:31:59.537063   44141 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0906 19:31:59.537069   44141 command_runner.go:130] > Device: 253,1	Inode: 5244438     Links: 1
	I0906 19:31:59.537079   44141 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0906 19:31:59.537087   44141 command_runner.go:130] > Access: 2024-09-06 19:25:08.799732055 +0000
	I0906 19:31:59.537098   44141 command_runner.go:130] > Modify: 2024-09-06 19:25:08.799732055 +0000
	I0906 19:31:59.537108   44141 command_runner.go:130] > Change: 2024-09-06 19:25:08.799732055 +0000
	I0906 19:31:59.537115   44141 command_runner.go:130] >  Birth: 2024-09-06 19:25:08.799732055 +0000
	I0906 19:31:59.537305   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 19:31:59.542760   44141 command_runner.go:130] > Certificate will not expire
	I0906 19:31:59.542828   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 19:31:59.548242   44141 command_runner.go:130] > Certificate will not expire
	I0906 19:31:59.548301   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 19:31:59.553827   44141 command_runner.go:130] > Certificate will not expire
	I0906 19:31:59.553889   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 19:31:59.559140   44141 command_runner.go:130] > Certificate will not expire
	I0906 19:31:59.559195   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 19:31:59.564477   44141 command_runner.go:130] > Certificate will not expire
	I0906 19:31:59.564608   44141 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 19:31:59.570063   44141 command_runner.go:130] > Certificate will not expire
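Each control-plane certificate is then checked with "openssl x509 -checkend 86400", which exits non-zero if the certificate would expire within the next 86400 seconds (24 hours), presumably so stale certificates can be regenerated before the cluster is started. A small sketch of the same checks:

    # Exit status 0 means the certificate is valid for at least another 24h
    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
      if sudo openssl x509 -noout -checkend 86400 \
           -in "/var/lib/minikube/certs/${crt}.crt"; then
        echo "${crt}: will not expire within 24h"
      else
        echo "${crt}: expires within 24h"
      fi
    done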
	I0906 19:31:59.570131   44141 kubeadm.go:392] StartCluster: {Name:multinode-002640 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-002640 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.12 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.52 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:fal
se kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:31:59.570282   44141 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:31:59.570344   44141 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:31:59.607628   44141 command_runner.go:130] > fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306
	I0906 19:31:59.607656   44141 command_runner.go:130] > e1baf3591f8658a01e3dceba30d428aef4dd7ca237973478fc9ff37669ec4bf2
	I0906 19:31:59.607665   44141 command_runner.go:130] > 9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb
	I0906 19:31:59.607675   44141 command_runner.go:130] > 7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809
	I0906 19:31:59.607684   44141 command_runner.go:130] > 826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34
	I0906 19:31:59.607692   44141 command_runner.go:130] > 9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef
	I0906 19:31:59.607698   44141 command_runner.go:130] > 3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a
	I0906 19:31:59.607707   44141 command_runner.go:130] > bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7
	I0906 19:31:59.607731   44141 cri.go:89] found id: "fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306"
	I0906 19:31:59.607740   44141 cri.go:89] found id: "e1baf3591f8658a01e3dceba30d428aef4dd7ca237973478fc9ff37669ec4bf2"
	I0906 19:31:59.607745   44141 cri.go:89] found id: "9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb"
	I0906 19:31:59.607751   44141 cri.go:89] found id: "7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809"
	I0906 19:31:59.607756   44141 cri.go:89] found id: "826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34"
	I0906 19:31:59.607762   44141 cri.go:89] found id: "9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef"
	I0906 19:31:59.607766   44141 cri.go:89] found id: "3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a"
	I0906 19:31:59.607771   44141 cri.go:89] found id: "bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7"
	I0906 19:31:59.607775   44141 cri.go:89] found id: ""
	I0906 19:31:59.607824   44141 ssh_runner.go:195] Run: sudo runc list -f json
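StartCluster begins by enumerating the existing kube-system containers through the CRI: the crictl call above returns the eight container IDs echoed back as "found id:" lines, followed by "runc list -f json" for the low-level runtime view. The same discovery can be reproduced manually on the node (assuming crictl is configured against the CRI-O socket):

    # IDs only, restricted to kube-system pods (matches the log output above)
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # Human-readable view of the same containers
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

    # Low-level runtime view
    sudo runc list -f json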
	
	
	==> CRI-O <==
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.811469722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651364811444538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d10ee89d-6beb-4a44-a9d6-e7fd10d07911 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.812109350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=751d15a4-6018-420c-8f4d-68b2067a9dcb name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.812183176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=751d15a4-6018-420c-8f4d-68b2067a9dcb name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.812547166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a724a02d1d40e5ce9a6c144791452eac8032625ce77e796e6b068cc6d4fee007,PodSandboxId:cd7c8d5d20cb949d0a2098c159e425b1465dc35176c1a5d93aa1339c250f4c73,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725651160604751317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa57d09d65ee9af85705b64493b569089b2f9110c13fa9e3d7fb316014a5b683,PodSandboxId:03b3e59c782e6b60b4f8e91f6455fb0e2941b2609f6fd6788f8ea5bd8918e739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725651127221258601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60903012261f8b49797fd5005d85b8b6897f9bcc1e07852670509faa428de3d8,PodSandboxId:f010aa4d0b2c7274691eaa560cf9055940d6a875a1d5fdb5ff88e77d7844b728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725651127046063004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff3fc7226613fe452a21e19a6048bd6e7dfbc83e99311374336ad933046d709,PodSandboxId:b3e165b4e546623ec3427e09df49f0cd10951946f101d740a7bf921259c3d47a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725651127080014813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\
"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01df9f85f91f3c09c29cef210087503759c1831fe0be1940125ebb223e539050,PodSandboxId:5a8b4eb90966626a398ca99fd1152cca34f9e41d801268c263d0d6c1921e5293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651126927621833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2734c24e5585361c206722b532e9e32d1f8be04b43de76b540f6613e035b51b,PodSandboxId:5900e25fc73ed360f1a2251bf6e848b988a8ce0fdf9531282ccde92236bdbe73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725651122138720708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b26eb0c19801bdac34a1293c1d478898e046a00ddd5d65dd21c502fd6a95206,PodSandboxId:47a10633a7f270eabd5618c815faa446a3b32087337e8fe36269ec7e0e8b8860,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725651122141356596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b6109aa0b6cd8,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f250ca60b2d272b4ddb4d71770f5ee8e02754d75184750851522f584f6371c1d,PodSandboxId:e0782b60f43d8c904a7f59d2495460ee84dd9e07a2517cc2a7b61868c16b1d9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725651122076797324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c2f51ba024b29a4ac10030f98d954eca576d5bd675ee511a3233fc18006359,PodSandboxId:7979ed526635e3afb43b4cb6cc1c63bf3ac1c5b87214f3dde33816e5c92b31ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725651122080148992,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599f6b25cf823d6044aa665b8e9bc2c6e4faba8efe29ec35d087f566b823b714,PodSandboxId:3a30bbfc3e3618482edc9f0a89bfbd18207ef721b2c91b55ddb3ed5574527e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725650795294702195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306,PodSandboxId:b16fb217c884cf5a9d162f808828c1891087eed4d6f6e4e70fee289b8ae30cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725650738982961223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1baf3591f8658a01e3dceba30d428aef4dd7ca237973478fc9ff37669ec4bf2,PodSandboxId:32dbc56fdd7f91261e21c840963d710e5dd3e0052be2b608a6ca7059bbd4eb1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725650737991455735,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb,PodSandboxId:a8c7e71e0bc17c9a9ca078b47a637b04fd21e07d445ed12ce103f8b84d71b55c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725650726219277305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809,PodSandboxId:fa6584d7fed346b8aac6705d4a51a92ff51ecb0742f8c84b41af16a5cbc9c0b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725650724248476769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34,PodSandboxId:6b861cae653621a5aedc6992414b7a1dd0b05af1d1c743cbc802cf5819174d6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725650713191454247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef,PodSandboxId:b923cc24dbfcfec6bbe3d71b1547e36b79b87194d8c7358f5b0e858f951d664a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725650713150043703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b
6109aa0b6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a,PodSandboxId:f2615377e2f23859cec623c64c4fa55073633dd556af9221cd1fccc9b3a9ebf1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725650713124164633,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},
Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7,PodSandboxId:8d2a2c6dc681a6692ee267d420e70da248b9630bd9819e00b9e280468355a68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725650713079407097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=751d15a4-6018-420c-8f4d-68b2067a9dcb name=/runtime.v1.RuntimeService/ListContainers
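The repeated Version, ImageFsInfo, and ListContainers debug entries in this CRI-O excerpt are routine CRI polling rather than errors; CRI-O logs each request/response pair at debug level. For comparison, the same endpoints can be queried manually (crictl assumed to be pointed at the standard CRI-O socket):

    # Equivalent manual queries against the CRI-O socket
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a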
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.853890192Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3128bc30-1985-4dc7-96e9-f7fbc8e9b8d6 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.853981469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3128bc30-1985-4dc7-96e9-f7fbc8e9b8d6 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.854891301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d914f1c0-e76c-4e0a-8a7b-311a392d6675 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.855308916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651364855286228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d914f1c0-e76c-4e0a-8a7b-311a392d6675 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.855898833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b38dede4-3b0f-4474-bdad-4cfe6073eb54 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.855981626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b38dede4-3b0f-4474-bdad-4cfe6073eb54 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.856375914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a724a02d1d40e5ce9a6c144791452eac8032625ce77e796e6b068cc6d4fee007,PodSandboxId:cd7c8d5d20cb949d0a2098c159e425b1465dc35176c1a5d93aa1339c250f4c73,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725651160604751317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa57d09d65ee9af85705b64493b569089b2f9110c13fa9e3d7fb316014a5b683,PodSandboxId:03b3e59c782e6b60b4f8e91f6455fb0e2941b2609f6fd6788f8ea5bd8918e739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725651127221258601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60903012261f8b49797fd5005d85b8b6897f9bcc1e07852670509faa428de3d8,PodSandboxId:f010aa4d0b2c7274691eaa560cf9055940d6a875a1d5fdb5ff88e77d7844b728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725651127046063004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff3fc7226613fe452a21e19a6048bd6e7dfbc83e99311374336ad933046d709,PodSandboxId:b3e165b4e546623ec3427e09df49f0cd10951946f101d740a7bf921259c3d47a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725651127080014813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\
"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01df9f85f91f3c09c29cef210087503759c1831fe0be1940125ebb223e539050,PodSandboxId:5a8b4eb90966626a398ca99fd1152cca34f9e41d801268c263d0d6c1921e5293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651126927621833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2734c24e5585361c206722b532e9e32d1f8be04b43de76b540f6613e035b51b,PodSandboxId:5900e25fc73ed360f1a2251bf6e848b988a8ce0fdf9531282ccde92236bdbe73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725651122138720708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b26eb0c19801bdac34a1293c1d478898e046a00ddd5d65dd21c502fd6a95206,PodSandboxId:47a10633a7f270eabd5618c815faa446a3b32087337e8fe36269ec7e0e8b8860,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725651122141356596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b6109aa0b6cd8,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f250ca60b2d272b4ddb4d71770f5ee8e02754d75184750851522f584f6371c1d,PodSandboxId:e0782b60f43d8c904a7f59d2495460ee84dd9e07a2517cc2a7b61868c16b1d9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725651122076797324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c2f51ba024b29a4ac10030f98d954eca576d5bd675ee511a3233fc18006359,PodSandboxId:7979ed526635e3afb43b4cb6cc1c63bf3ac1c5b87214f3dde33816e5c92b31ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725651122080148992,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599f6b25cf823d6044aa665b8e9bc2c6e4faba8efe29ec35d087f566b823b714,PodSandboxId:3a30bbfc3e3618482edc9f0a89bfbd18207ef721b2c91b55ddb3ed5574527e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725650795294702195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306,PodSandboxId:b16fb217c884cf5a9d162f808828c1891087eed4d6f6e4e70fee289b8ae30cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725650738982961223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1baf3591f8658a01e3dceba30d428aef4dd7ca237973478fc9ff37669ec4bf2,PodSandboxId:32dbc56fdd7f91261e21c840963d710e5dd3e0052be2b608a6ca7059bbd4eb1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725650737991455735,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb,PodSandboxId:a8c7e71e0bc17c9a9ca078b47a637b04fd21e07d445ed12ce103f8b84d71b55c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725650726219277305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809,PodSandboxId:fa6584d7fed346b8aac6705d4a51a92ff51ecb0742f8c84b41af16a5cbc9c0b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725650724248476769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34,PodSandboxId:6b861cae653621a5aedc6992414b7a1dd0b05af1d1c743cbc802cf5819174d6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725650713191454247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef,PodSandboxId:b923cc24dbfcfec6bbe3d71b1547e36b79b87194d8c7358f5b0e858f951d664a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725650713150043703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b
6109aa0b6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a,PodSandboxId:f2615377e2f23859cec623c64c4fa55073633dd556af9221cd1fccc9b3a9ebf1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725650713124164633,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},
Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7,PodSandboxId:8d2a2c6dc681a6692ee267d420e70da248b9630bd9819e00b9e280468355a68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725650713079407097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b38dede4-3b0f-4474-bdad-4cfe6073eb54 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.902575306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb0c4d1e-b069-4131-a334-dfc76e1c4189 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.902718850Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb0c4d1e-b069-4131-a334-dfc76e1c4189 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.904124864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0e23a7c-42bd-4ad6-a741-0c1e62b44edf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.904560040Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651364904536665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0e23a7c-42bd-4ad6-a741-0c1e62b44edf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.905183843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de55a879-53f5-47c1-9421-d23f97cddca3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.905250255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de55a879-53f5-47c1-9421-d23f97cddca3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:36:04 multinode-002640 crio[2738]: time="2024-09-06 19:36:04.905576183Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a724a02d1d40e5ce9a6c144791452eac8032625ce77e796e6b068cc6d4fee007,PodSandboxId:cd7c8d5d20cb949d0a2098c159e425b1465dc35176c1a5d93aa1339c250f4c73,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1725651160604751317,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa57d09d65ee9af85705b64493b569089b2f9110c13fa9e3d7fb316014a5b683,PodSandboxId:03b3e59c782e6b60b4f8e91f6455fb0e2941b2609f6fd6788f8ea5bd8918e739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1725651127221258601,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60903012261f8b49797fd5005d85b8b6897f9bcc1e07852670509faa428de3d8,PodSandboxId:f010aa4d0b2c7274691eaa560cf9055940d6a875a1d5fdb5ff88e77d7844b728,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725651127046063004,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff3fc7226613fe452a21e19a6048bd6e7dfbc83e99311374336ad933046d709,PodSandboxId:b3e165b4e546623ec3427e09df49f0cd10951946f101d740a7bf921259c3d47a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725651127080014813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\
"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01df9f85f91f3c09c29cef210087503759c1831fe0be1940125ebb223e539050,PodSandboxId:5a8b4eb90966626a398ca99fd1152cca34f9e41d801268c263d0d6c1921e5293,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651126927621833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2734c24e5585361c206722b532e9e32d1f8be04b43de76b540f6613e035b51b,PodSandboxId:5900e25fc73ed360f1a2251bf6e848b988a8ce0fdf9531282ccde92236bdbe73,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725651122138720708,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b26eb0c19801bdac34a1293c1d478898e046a00ddd5d65dd21c502fd6a95206,PodSandboxId:47a10633a7f270eabd5618c815faa446a3b32087337e8fe36269ec7e0e8b8860,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725651122141356596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b6109aa0b6cd8,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f250ca60b2d272b4ddb4d71770f5ee8e02754d75184750851522f584f6371c1d,PodSandboxId:e0782b60f43d8c904a7f59d2495460ee84dd9e07a2517cc2a7b61868c16b1d9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725651122076797324,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c2f51ba024b29a4ac10030f98d954eca576d5bd675ee511a3233fc18006359,PodSandboxId:7979ed526635e3afb43b4cb6cc1c63bf3ac1c5b87214f3dde33816e5c92b31ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725651122080148992,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599f6b25cf823d6044aa665b8e9bc2c6e4faba8efe29ec35d087f566b823b714,PodSandboxId:3a30bbfc3e3618482edc9f0a89bfbd18207ef721b2c91b55ddb3ed5574527e5b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1725650795294702195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lmdp2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c0be4d0-1c53-4144-ad7c-d806c021b7a9,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306,PodSandboxId:b16fb217c884cf5a9d162f808828c1891087eed4d6f6e4e70fee289b8ae30cfa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725650738982961223,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r9zn7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8fe242e6-a5a0-4da8-8772-bf1394fdc942,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1baf3591f8658a01e3dceba30d428aef4dd7ca237973478fc9ff37669ec4bf2,PodSandboxId:32dbc56fdd7f91261e21c840963d710e5dd3e0052be2b608a6ca7059bbd4eb1d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725650737991455735,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 4bb70bf8-eeca-4508-a590-4e2c5aa927bf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb,PodSandboxId:a8c7e71e0bc17c9a9ca078b47a637b04fd21e07d445ed12ce103f8b84d71b55c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1725650726219277305,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6jxr2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 96166804-e885-4f84-aecd-a0b3bda8337f,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809,PodSandboxId:fa6584d7fed346b8aac6705d4a51a92ff51ecb0742f8c84b41af16a5cbc9c0b3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725650724248476769,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2p8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: cf4a34c4-5c71-49b8-9adc-c0cb2d745d6e,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34,PodSandboxId:6b861cae653621a5aedc6992414b7a1dd0b05af1d1c743cbc802cf5819174d6e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725650713191454247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270747f739d4ddf280a4a7ba1a5a608f
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef,PodSandboxId:b923cc24dbfcfec6bbe3d71b1547e36b79b87194d8c7358f5b0e858f951d664a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725650713150043703,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b4553ea27491810d24b
6109aa0b6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a,PodSandboxId:f2615377e2f23859cec623c64c4fa55073633dd556af9221cd1fccc9b3a9ebf1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725650713124164633,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd16a4366ababa094dd9841805105e1f,},
Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7,PodSandboxId:8d2a2c6dc681a6692ee267d420e70da248b9630bd9819e00b9e280468355a68f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725650713079407097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-002640,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c279ebb7122d06252ca9a31d4f8602a,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de55a879-53f5-47c1-9421-d23f97cddca3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a724a02d1d40e       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   cd7c8d5d20cb9       busybox-7dff88458-lmdp2
	fa57d09d65ee9       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago       Running             kindnet-cni               1                   03b3e59c782e6       kindnet-6jxr2
	9ff3fc7226613       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   b3e165b4e5466       coredns-6f6b679f8f-r9zn7
	60903012261f8       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      3 minutes ago       Running             kube-proxy                1                   f010aa4d0b2c7       kube-proxy-k2p8s
	01df9f85f91f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   5a8b4eb909666       storage-provisioner
	5b26eb0c19801       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   47a10633a7f27       kube-controller-manager-multinode-002640
	b2734c24e5585       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   5900e25fc73ed       kube-scheduler-multinode-002640
	e7c2f51ba024b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   7979ed526635e       etcd-multinode-002640
	f250ca60b2d27       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   e0782b60f43d8       kube-apiserver-multinode-002640
	599f6b25cf823       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   3a30bbfc3e361       busybox-7dff88458-lmdp2
	fefbece35c814       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   b16fb217c884c       coredns-6f6b679f8f-r9zn7
	e1baf3591f865       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   32dbc56fdd7f9       storage-provisioner
	9f4b7c0789cdb       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   a8c7e71e0bc17       kindnet-6jxr2
	7a97ccf9e25bd       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   fa6584d7fed34       kube-proxy-k2p8s
	826cb5eabec2d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   6b861cae65362       etcd-multinode-002640
	9457839bc33e5       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   b923cc24dbfcf       kube-controller-manager-multinode-002640
	3a7bc4e5358db       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   f2615377e2f23       kube-scheduler-multinode-002640
	bc1c460c83658       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   8d2a2c6dc681a       kube-apiserver-multinode-002640
	
	
	==> coredns [9ff3fc7226613fe452a21e19a6048bd6e7dfbc83e99311374336ad933046d709] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40355 - 44850 "HINFO IN 4530067088066444664.7866583828330175151. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016266096s
	
	
	==> coredns [fefbece35c814ffe5311d4343a59efda2e6cec9f99da02c26ccf57d98f6b0306] <==
	[INFO] 10.244.0.3:37220 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001842971s
	[INFO] 10.244.0.3:54089 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000052774s
	[INFO] 10.244.0.3:54430 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175648s
	[INFO] 10.244.0.3:41666 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001312607s
	[INFO] 10.244.0.3:52046 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000037123s
	[INFO] 10.244.0.3:53322 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000212003s
	[INFO] 10.244.0.3:40781 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034851s
	[INFO] 10.244.1.2:44728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150045s
	[INFO] 10.244.1.2:36902 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197259s
	[INFO] 10.244.1.2:37177 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082454s
	[INFO] 10.244.1.2:43564 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086899s
	[INFO] 10.244.0.3:37835 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141518s
	[INFO] 10.244.0.3:55848 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012575s
	[INFO] 10.244.0.3:57984 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075199s
	[INFO] 10.244.0.3:52551 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123644s
	[INFO] 10.244.1.2:59686 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117513s
	[INFO] 10.244.1.2:40137 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000291439s
	[INFO] 10.244.1.2:48575 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014294s
	[INFO] 10.244.1.2:46149 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000156939s
	[INFO] 10.244.0.3:38524 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207642s
	[INFO] 10.244.0.3:36093 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104122s
	[INFO] 10.244.0.3:33620 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077408s
	[INFO] 10.244.0.3:41967 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148167s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-002640
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-002640
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=multinode-002640
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T19_25_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:25:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-002640
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:36:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:32:05 +0000   Fri, 06 Sep 2024 19:25:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:32:05 +0000   Fri, 06 Sep 2024 19:25:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:32:05 +0000   Fri, 06 Sep 2024 19:25:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:32:05 +0000   Fri, 06 Sep 2024 19:25:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    multinode-002640
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3607ea6983d145bc88300af15ddf5220
	  System UUID:                3607ea69-83d1-45bc-8830-0af15ddf5220
	  Boot ID:                    3b3ddd88-c018-4c71-9ab0-6dfe28885d9c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lmdp2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 coredns-6f6b679f8f-r9zn7                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-002640                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-6jxr2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-002640             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-002640    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-k2p8s                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-002640             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-002640 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-002640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-002640 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-002640 event: Registered Node multinode-002640 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-002640 status is now: NodeReady
	  Normal  Starting                 4m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node multinode-002640 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node multinode-002640 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node multinode-002640 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m57s                node-controller  Node multinode-002640 event: Registered Node multinode-002640 in Controller
	
	
	Name:               multinode-002640-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-002640-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=multinode-002640
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_06T19_32_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:32:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-002640-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:33:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 06 Sep 2024 19:33:14 +0000   Fri, 06 Sep 2024 19:34:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 06 Sep 2024 19:33:14 +0000   Fri, 06 Sep 2024 19:34:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 06 Sep 2024 19:33:14 +0000   Fri, 06 Sep 2024 19:34:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 06 Sep 2024 19:33:14 +0000   Fri, 06 Sep 2024 19:34:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    multinode-002640-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f648484cde984cd4ab7f7b70a35d7214
	  System UUID:                f648484c-de98-4cd4-ab7f-7b70a35d7214
	  Boot ID:                    63abbd1b-aec2-430b-8dad-8475ed083e6a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7qc4m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 kindnet-7lg7n              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m54s
	  kube-system                 kube-proxy-8dfs6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m47s                  kube-proxy       
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m54s (x2 over 9m54s)  kubelet          Node multinode-002640-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m54s (x2 over 9m54s)  kubelet          Node multinode-002640-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m54s (x2 over 9m54s)  kubelet          Node multinode-002640-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m35s                  kubelet          Node multinode-002640-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m21s (x2 over 3m22s)  kubelet          Node multinode-002640-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x2 over 3m22s)  kubelet          Node multinode-002640-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x2 over 3m22s)  kubelet          Node multinode-002640-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m17s                  node-controller  Node multinode-002640-m02 event: Registered Node multinode-002640-m02 in Controller
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-002640-m02 status is now: NodeReady
	  Normal  NodeNotReady             106s                   node-controller  Node multinode-002640-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.062467] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.178574] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.148092] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.265951] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +3.928107] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +4.673964] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.061511] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.985333] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.089577] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.678389] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.101675] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.029032] kauditd_printk_skb: 60 callbacks suppressed
	[Sep 6 19:26] kauditd_printk_skb: 14 callbacks suppressed
	[Sep 6 19:31] systemd-fstab-generator[2662]: Ignoring "noauto" option for root device
	[  +0.160453] systemd-fstab-generator[2674]: Ignoring "noauto" option for root device
	[  +0.169870] systemd-fstab-generator[2689]: Ignoring "noauto" option for root device
	[  +0.132379] systemd-fstab-generator[2701]: Ignoring "noauto" option for root device
	[  +0.268073] systemd-fstab-generator[2729]: Ignoring "noauto" option for root device
	[  +7.861502] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.085312] kauditd_printk_skb: 100 callbacks suppressed
	[Sep 6 19:32] systemd-fstab-generator[2943]: Ignoring "noauto" option for root device
	[  +5.643569] kauditd_printk_skb: 74 callbacks suppressed
	[  +9.677317] kauditd_printk_skb: 34 callbacks suppressed
	[  +3.218143] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[ +20.869624] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [826cb5eabec2d4e2d6abb679eda2ad3340fe6fbb64a2716dd7bffc6475843a34] <==
	{"level":"info","ts":"2024-09-06T19:25:14.530846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became leader at term 2"}
	{"level":"info","ts":"2024-09-06T19:25:14.530853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b546310005a4f8aa elected leader b546310005a4f8aa at term 2"}
	{"level":"info","ts":"2024-09-06T19:25:14.540791Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:25:14.542857Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b546310005a4f8aa","local-member-attributes":"{Name:multinode-002640 ClientURLs:[https://192.168.39.11:2379]}","request-path":"/0/members/b546310005a4f8aa/attributes","cluster-id":"7cea85d65aab3581","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:25:14.543074Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:25:14.543284Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cea85d65aab3581","local-member-id":"b546310005a4f8aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:25:14.546749Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:25:14.546821Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:25:14.547475Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:25:14.543399Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:25:14.545688Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:25:14.552464Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.11:2379"}
	{"level":"info","ts":"2024-09-06T19:25:14.555328Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:25:14.559000Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:25:14.560117Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:30:19.249150Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-06T19:30:19.249289Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-002640","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.11:2380"],"advertise-client-urls":["https://192.168.39.11:2379"]}
	{"level":"warn","ts":"2024-09-06T19:30:19.249417Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:30:19.249512Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:30:19.331394Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.11:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:30:19.331445Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.11:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:30:19.332928Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b546310005a4f8aa","current-leader-member-id":"b546310005a4f8aa"}
	{"level":"info","ts":"2024-09-06T19:30:19.335232Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.11:2380"}
	{"level":"info","ts":"2024-09-06T19:30:19.335346Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.11:2380"}
	{"level":"info","ts":"2024-09-06T19:30:19.335355Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-002640","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.11:2380"],"advertise-client-urls":["https://192.168.39.11:2379"]}
	
	
	==> etcd [e7c2f51ba024b29a4ac10030f98d954eca576d5bd675ee511a3233fc18006359] <==
	{"level":"info","ts":"2024-09-06T19:32:02.452853Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:32:02.436896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa switched to configuration voters=(13062181645399161002)"}
	{"level":"info","ts":"2024-09-06T19:32:02.453282Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7cea85d65aab3581","local-member-id":"b546310005a4f8aa","added-peer-id":"b546310005a4f8aa","added-peer-peer-urls":["https://192.168.39.11:2380"]}
	{"level":"info","ts":"2024-09-06T19:32:02.453451Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b546310005a4f8aa","initial-advertise-peer-urls":["https://192.168.39.11:2380"],"listen-peer-urls":["https://192.168.39.11:2380"],"advertise-client-urls":["https://192.168.39.11:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.11:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T19:32:02.453716Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cea85d65aab3581","local-member-id":"b546310005a4f8aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:32:02.436340Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-06T19:32:02.455733Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T19:32:02.467386Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:32:04.191627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:32:04.191727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:32:04.191766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa received MsgPreVoteResp from b546310005a4f8aa at term 2"}
	{"level":"info","ts":"2024-09-06T19:32:04.191786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:32:04.191792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa received MsgVoteResp from b546310005a4f8aa at term 3"}
	{"level":"info","ts":"2024-09-06T19:32:04.191800Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b546310005a4f8aa became leader at term 3"}
	{"level":"info","ts":"2024-09-06T19:32:04.191808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b546310005a4f8aa elected leader b546310005a4f8aa at term 3"}
	{"level":"info","ts":"2024-09-06T19:32:04.197436Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b546310005a4f8aa","local-member-attributes":"{Name:multinode-002640 ClientURLs:[https://192.168.39.11:2379]}","request-path":"/0/members/b546310005a4f8aa/attributes","cluster-id":"7cea85d65aab3581","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:32:04.197451Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:32:04.197684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:32:04.198088Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:32:04.198118Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:32:04.198953Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:32:04.200490Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.11:2379"}
	{"level":"info","ts":"2024-09-06T19:32:04.199254Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:32:04.201871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:33:27.159319Z","caller":"traceutil/trace.go:171","msg":"trace[788905189] transaction","detail":"{read_only:false; response_revision:1138; number_of_response:1; }","duration":"148.979422ms","start":"2024-09-06T19:33:27.010303Z","end":"2024-09-06T19:33:27.159283Z","steps":["trace[788905189] 'process raft request'  (duration: 104.564745ms)","trace[788905189] 'compare'  (duration: 43.963352ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:36:05 up 11 min,  0 users,  load average: 0.59, 0.35, 0.18
	Linux multinode-002640 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9f4b7c0789cdb9d6726a5f7ce29238944f897a14bd5a55ed606b1c37249822fb] <==
	I0906 19:29:37.200492       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:29:47.193593       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:29:47.193774       1 main.go:299] handling current node
	I0906 19:29:47.193811       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:29:47.193831       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:29:47.193964       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:29:47.193987       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:29:57.194072       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:29:57.194108       1 main.go:299] handling current node
	I0906 19:29:57.194129       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:29:57.194134       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:29:57.194275       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:29:57.194302       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:30:07.201163       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:30:07.201283       1 main.go:299] handling current node
	I0906 19:30:07.201314       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:30:07.201333       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:30:07.201476       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:30:07.201507       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:30:17.193348       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:30:17.193425       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:30:17.193582       1 main.go:295] Handling node with IPs: map[192.168.39.52:{}]
	I0906 19:30:17.193614       1 main.go:322] Node multinode-002640-m03 has CIDR [10.244.3.0/24] 
	I0906 19:30:17.193752       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:30:17.193785       1 main.go:299] handling current node
	
	
	==> kindnet [fa57d09d65ee9af85705b64493b569089b2f9110c13fa9e3d7fb316014a5b683] <==
	I0906 19:34:58.392955       1 main.go:299] handling current node
	I0906 19:35:08.389448       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:35:08.389555       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:35:08.389769       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:35:08.389798       1 main.go:299] handling current node
	I0906 19:35:18.389387       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:35:18.389483       1 main.go:299] handling current node
	I0906 19:35:18.389509       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:35:18.389520       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:35:28.389503       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:35:28.389748       1 main.go:299] handling current node
	I0906 19:35:28.389808       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:35:28.389828       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:35:38.397753       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:35:38.397786       1 main.go:299] handling current node
	I0906 19:35:38.397799       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:35:38.397804       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:35:48.395386       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:35:48.395440       1 main.go:299] handling current node
	I0906 19:35:48.395458       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:35:48.395462       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:35:58.396868       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0906 19:35:58.396983       1 main.go:322] Node multinode-002640-m02 has CIDR [10.244.1.0/24] 
	I0906 19:35:58.397134       1 main.go:295] Handling node with IPs: map[192.168.39.11:{}]
	I0906 19:35:58.397157       1 main.go:299] handling current node
	
	
	==> kube-apiserver [bc1c460c83658d3788086ca8ca1858109bfe2bc77f93c71d8e20e1b4ac9251e7] <==
	I0906 19:25:23.479935       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0906 19:25:23.580162       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0906 19:26:38.079366       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:43940: use of closed network connection
	E0906 19:26:38.292861       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:43954: use of closed network connection
	E0906 19:26:38.482862       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:43962: use of closed network connection
	E0906 19:26:38.648600       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:43988: use of closed network connection
	E0906 19:26:38.969711       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:44040: use of closed network connection
	E0906 19:26:39.258048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:44064: use of closed network connection
	E0906 19:26:39.432522       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:44090: use of closed network connection
	E0906 19:26:39.601921       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:44106: use of closed network connection
	E0906 19:26:39.773220       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:44112: use of closed network connection
	I0906 19:30:19.247576       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0906 19:30:19.262288       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.263950       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0906 19:30:19.266458       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0906 19:30:19.268732       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.269022       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.269738       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.270357       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.275548       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.276136       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.276202       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.276259       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.276294       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:30:19.284942       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f250ca60b2d272b4ddb4d71770f5ee8e02754d75184750851522f584f6371c1d] <==
	I0906 19:32:05.530037       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0906 19:32:05.530997       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:32:05.531268       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:32:05.531301       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:32:05.531459       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:32:05.537069       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:32:05.541034       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:32:05.541071       1 policy_source.go:224] refreshing policies
	I0906 19:32:05.553147       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:32:05.554996       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:32:05.556125       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:32:05.556218       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:32:05.556234       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:32:05.556240       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:32:05.556245       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:32:05.557993       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E0906 19:32:05.576770       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0906 19:32:06.441716       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 19:32:07.984460       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0906 19:32:08.122877       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0906 19:32:08.133468       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0906 19:32:08.208286       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:32:08.214182       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 19:32:08.965558       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:32:09.143843       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5b26eb0c19801bdac34a1293c1d478898e046a00ddd5d65dd21c502fd6a95206] <==
	I0906 19:33:20.812030       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-002640-m03" podCIDRs=["10.244.2.0/24"]
	I0906 19:33:20.812118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:20.812182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:20.816416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:21.210307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:21.552152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:24.042783       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:30.828477       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:38.466439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:38.466755       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:33:38.479236       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:39.017738       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:43.187864       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:43.220415       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:43.662574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:33:43.662713       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:34:19.035107       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:34:19.054715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:34:19.082597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.580343ms"
	I0906 19:34:19.082788       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="81.248µs"
	I0906 19:34:24.109252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:34:28.898424       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2hxnj"
	I0906 19:34:28.927973       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2hxnj"
	I0906 19:34:28.928016       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-67k7b"
	I0906 19:34:28.959940       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-67k7b"
	
	
	==> kube-controller-manager [9457839bc33e5d9c665583106ed2507b55b23ed47dd3102ca97f03750a432eef] <==
	I0906 19:27:55.740526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:55.975411       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:55.975526       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:27:57.133170       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:27:57.134534       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-002640-m03\" does not exist"
	I0906 19:27:57.152172       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-002640-m03" podCIDRs=["10.244.3.0/24"]
	I0906 19:27:57.152277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:57.152467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:57.500834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:57.689831       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:27:57.857139       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:28:07.480698       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:28:13.838283       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m02"
	I0906 19:28:13.838454       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:28:13.849307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:28:17.600057       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:28:57.616554       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-002640-m03"
	I0906 19:28:57.617201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:28:57.633167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:28:57.673162       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.94992ms"
	I0906 19:28:57.674256       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.173µs"
	I0906 19:29:02.674959       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:29:02.694620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	I0906 19:29:02.720893       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m02"
	I0906 19:29:12.800012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-002640-m03"
	
	
	==> kube-proxy [60903012261f8b49797fd5005d85b8b6897f9bcc1e07852670509faa428de3d8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:32:07.497741       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:32:07.579205       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.11"]
	E0906 19:32:07.579277       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:32:07.716683       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:32:07.716722       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:32:07.716755       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:32:07.725815       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:32:07.726270       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:32:07.726362       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:32:07.730459       1 config.go:197] "Starting service config controller"
	I0906 19:32:07.730601       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:32:07.730712       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:32:07.730758       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:32:07.731312       1 config.go:326] "Starting node config controller"
	I0906 19:32:07.731403       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:32:07.832827       1 shared_informer.go:320] Caches are synced for node config
	I0906 19:32:07.832871       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:32:07.832898       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [7a97ccf9e25bde202376882c4d6fe46719626efcf803dec89a0243112979e809] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:25:24.674371       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:25:24.685511       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.11"]
	E0906 19:25:24.685757       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:25:24.756918       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:25:24.756977       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:25:24.757017       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:25:24.763036       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:25:24.763358       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:25:24.763389       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:25:24.765141       1 config.go:197] "Starting service config controller"
	I0906 19:25:24.765181       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:25:24.765199       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:25:24.765204       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:25:24.766117       1 config.go:326] "Starting node config controller"
	I0906 19:25:24.766128       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:25:24.866074       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:25:24.866089       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:25:24.866206       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3a7bc4e5358dbbefe4d28e8036e83931f471c105fa34dd514add2a9d3487005a] <==
	W0906 19:25:15.844826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 19:25:15.844908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:15.844927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:25:15.847413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0906 19:25:15.847166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.739464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 19:25:16.739493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.761143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 19:25:16.761266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.839961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 19:25:16.840061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.859516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 19:25:16.859758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.905833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 19:25:16.906202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:16.998843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 19:25:16.998986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:17.028199       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 19:25:17.028329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:17.067222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 19:25:17.067359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 19:25:17.101839       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 19:25:17.101960       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0906 19:25:20.322444       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 19:30:19.248571       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b2734c24e5585361c206722b532e9e32d1f8be04b43de76b540f6613e035b51b] <==
	I0906 19:32:03.182191       1 serving.go:386] Generated self-signed cert in-memory
	W0906 19:32:05.480211       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:32:05.480237       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:32:05.480247       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:32:05.480257       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:32:05.539251       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 19:32:05.541732       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:32:05.562858       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 19:32:05.563140       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:32:05.563231       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:32:05.563345       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:32:05.663939       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 19:34:51 multinode-002640 kubelet[2950]: E0906 19:34:51.478381    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651291477609135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:01 multinode-002640 kubelet[2950]: E0906 19:35:01.451553    2950 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:35:01 multinode-002640 kubelet[2950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:35:01 multinode-002640 kubelet[2950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:35:01 multinode-002640 kubelet[2950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:35:01 multinode-002640 kubelet[2950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:35:01 multinode-002640 kubelet[2950]: E0906 19:35:01.480491    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651301479598690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:01 multinode-002640 kubelet[2950]: E0906 19:35:01.480514    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651301479598690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:11 multinode-002640 kubelet[2950]: E0906 19:35:11.483295    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651311483015638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:11 multinode-002640 kubelet[2950]: E0906 19:35:11.483343    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651311483015638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:21 multinode-002640 kubelet[2950]: E0906 19:35:21.487261    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651321486621609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:21 multinode-002640 kubelet[2950]: E0906 19:35:21.487309    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651321486621609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:31 multinode-002640 kubelet[2950]: E0906 19:35:31.489079    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651331488316603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:31 multinode-002640 kubelet[2950]: E0906 19:35:31.489102    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651331488316603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:41 multinode-002640 kubelet[2950]: E0906 19:35:41.492454    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651341491034250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:41 multinode-002640 kubelet[2950]: E0906 19:35:41.493017    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651341491034250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:51 multinode-002640 kubelet[2950]: E0906 19:35:51.494480    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651351494186995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:35:51 multinode-002640 kubelet[2950]: E0906 19:35:51.494521    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651351494186995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:36:01 multinode-002640 kubelet[2950]: E0906 19:36:01.450926    2950 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 19:36:01 multinode-002640 kubelet[2950]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 19:36:01 multinode-002640 kubelet[2950]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 19:36:01 multinode-002640 kubelet[2950]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 19:36:01 multinode-002640 kubelet[2950]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 19:36:01 multinode-002640 kubelet[2950]: E0906 19:36:01.498393    2950 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651361496045446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:36:01 multinode-002640 kubelet[2950]: E0906 19:36:01.498418    2950 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651361496045446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:36:04.550751   46089 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19576-6021/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-002640 -n multinode-002640
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-002640 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.29s)
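Note on the stderr captured above: "bufio.Scanner: token too long" is the stock Go error returned when a scanned line exceeds bufio.Scanner's default 64 KiB token limit, so the lastStart.txt being read evidently contains a longer line. Below is a minimal sketch of reading such a file with an enlarged scanner buffer; it is an illustration only, not minikube's logs.go implementation, and the file path and 1 MiB cap are assumptions for the example.

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path for the example
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line limit from the default 64 KiB to 1 MiB so a very
		// long log line no longer triggers "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}

Under the same assumptions, bufio.Reader with ReadString('\n') is an alternative that has no fixed per-line limit.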

                                                
                                    
x
+
TestPreload (162.29s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-767830 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-767830 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m31.358546006s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-767830 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-767830 image pull gcr.io/k8s-minikube/busybox: (2.091398567s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-767830
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-767830: (7.282788364s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-767830 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0906 19:41:44.178246   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-767830 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (58.501398729s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-767830 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
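For context on the assertion above: the test pulls gcr.io/k8s-minikube/busybox into the cluster that was started with --preload=false, stops the VM, restarts it with the v1.24.4 preload enabled, and then expects the previously pulled image to still appear in the profile's image list; the list shown contains only images shipped in the preload. A minimal sketch of that kind of check follows (an illustration under stated assumptions, not the actual preload_test.go code; the binary path and profile name are taken from the log above):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "gcr.io/k8s-minikube/busybox"
		// Run "image list" for the profile and capture stdout+stderr.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-767830", "image", "list").CombinedOutput()
		if err != nil {
			log.Fatalf("image list failed: %v\n%s", err, out)
		}
		// The assertion: the image pulled before the stop/restart must survive.
		if !strings.Contains(string(out), want) {
			log.Fatalf("expected to find %s in image list output, instead got:\n%s", want, out)
		}
		fmt.Println("preloaded cluster kept the pulled image")
	}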
panic.go:626: *** TestPreload FAILED at 2024-09-06 19:42:37.260439818 +0000 UTC m=+4403.750193347
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-767830 -n test-preload-767830
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-767830 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-767830 logs -n 25: (1.067404096s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n multinode-002640 sudo cat                                       | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-002640-m03_multinode-002640.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-002640 cp multinode-002640-m03:/home/docker/cp-test.txt                       | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m02:/home/docker/cp-test_multinode-002640-m03_multinode-002640-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n                                                                 | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | multinode-002640-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-002640 ssh -n multinode-002640-m02 sudo cat                                   | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-002640-m03_multinode-002640-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-002640 node stop m03                                                          | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:27 UTC |
	| node    | multinode-002640 node start                                                             | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:27 UTC | 06 Sep 24 19:28 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-002640                                                                | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:28 UTC |                     |
	| stop    | -p multinode-002640                                                                     | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:28 UTC |                     |
	| start   | -p multinode-002640                                                                     | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:30 UTC | 06 Sep 24 19:33 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-002640                                                                | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:33 UTC |                     |
	| node    | multinode-002640 node delete                                                            | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:33 UTC | 06 Sep 24 19:33 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-002640 stop                                                                   | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:33 UTC |                     |
	| start   | -p multinode-002640                                                                     | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:36 UTC | 06 Sep 24 19:39 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-002640                                                                | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:39 UTC |                     |
	| start   | -p multinode-002640-m02                                                                 | multinode-002640-m02 | jenkins | v1.34.0 | 06 Sep 24 19:39 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-002640-m03                                                                 | multinode-002640-m03 | jenkins | v1.34.0 | 06 Sep 24 19:39 UTC | 06 Sep 24 19:39 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-002640                                                                 | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:39 UTC |                     |
	| delete  | -p multinode-002640-m03                                                                 | multinode-002640-m03 | jenkins | v1.34.0 | 06 Sep 24 19:39 UTC | 06 Sep 24 19:39 UTC |
	| delete  | -p multinode-002640                                                                     | multinode-002640     | jenkins | v1.34.0 | 06 Sep 24 19:39 UTC | 06 Sep 24 19:39 UTC |
	| start   | -p test-preload-767830                                                                  | test-preload-767830  | jenkins | v1.34.0 | 06 Sep 24 19:39 UTC | 06 Sep 24 19:41 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-767830 image pull                                                          | test-preload-767830  | jenkins | v1.34.0 | 06 Sep 24 19:41 UTC | 06 Sep 24 19:41 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-767830                                                                  | test-preload-767830  | jenkins | v1.34.0 | 06 Sep 24 19:41 UTC | 06 Sep 24 19:41 UTC |
	| start   | -p test-preload-767830                                                                  | test-preload-767830  | jenkins | v1.34.0 | 06 Sep 24 19:41 UTC | 06 Sep 24 19:42 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-767830 image list                                                          | test-preload-767830  | jenkins | v1.34.0 | 06 Sep 24 19:42 UTC | 06 Sep 24 19:42 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 19:41:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:41:38.574812   48447 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:41:38.574911   48447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:41:38.574920   48447 out.go:358] Setting ErrFile to fd 2...
	I0906 19:41:38.574925   48447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:41:38.575088   48447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:41:38.575585   48447 out.go:352] Setting JSON to false
	I0906 19:41:38.576399   48447 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5048,"bootTime":1725646651,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:41:38.576455   48447 start.go:139] virtualization: kvm guest
	I0906 19:41:38.578465   48447 out.go:177] * [test-preload-767830] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:41:38.579822   48447 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:41:38.579821   48447 notify.go:220] Checking for updates...
	I0906 19:41:38.581074   48447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:41:38.582263   48447 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:41:38.583557   48447 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:41:38.584599   48447 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:41:38.585625   48447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:41:38.587068   48447 config.go:182] Loaded profile config "test-preload-767830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0906 19:41:38.587453   48447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:41:38.587507   48447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:41:38.602048   48447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I0906 19:41:38.602421   48447 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:41:38.602943   48447 main.go:141] libmachine: Using API Version  1
	I0906 19:41:38.602965   48447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:41:38.603263   48447 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:41:38.603436   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:41:38.605095   48447 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 19:41:38.606207   48447 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:41:38.606507   48447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:41:38.606543   48447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:41:38.620468   48447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35623
	I0906 19:41:38.620782   48447 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:41:38.621203   48447 main.go:141] libmachine: Using API Version  1
	I0906 19:41:38.621222   48447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:41:38.621499   48447 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:41:38.621674   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:41:38.655713   48447 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 19:41:38.656979   48447 start.go:297] selected driver: kvm2
	I0906 19:41:38.656994   48447 start.go:901] validating driver "kvm2" against &{Name:test-preload-767830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-767830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:41:38.657094   48447 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:41:38.657768   48447 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:41:38.657838   48447 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:41:38.672382   48447 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:41:38.672678   48447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:41:38.672710   48447 cni.go:84] Creating CNI manager for ""
	I0906 19:41:38.672724   48447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:41:38.672772   48447 start.go:340] cluster config:
	{Name:test-preload-767830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-767830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:41:38.672877   48447 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:41:38.675603   48447 out.go:177] * Starting "test-preload-767830" primary control-plane node in "test-preload-767830" cluster
	I0906 19:41:38.676806   48447 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0906 19:41:38.699084   48447 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0906 19:41:38.699103   48447 cache.go:56] Caching tarball of preloaded images
	I0906 19:41:38.699228   48447 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0906 19:41:38.700774   48447 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0906 19:41:38.701825   48447 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0906 19:41:38.731130   48447 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0906 19:41:42.465806   48447 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0906 19:41:42.465913   48447 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0906 19:41:43.302574   48447 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0906 19:41:43.302708   48447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/config.json ...
	I0906 19:41:43.302928   48447 start.go:360] acquireMachinesLock for test-preload-767830: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:41:43.302984   48447 start.go:364] duration metric: took 36.641µs to acquireMachinesLock for "test-preload-767830"
	I0906 19:41:43.302998   48447 start.go:96] Skipping create...Using existing machine configuration
	I0906 19:41:43.303003   48447 fix.go:54] fixHost starting: 
	I0906 19:41:43.303305   48447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:41:43.303338   48447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:41:43.317779   48447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38133
	I0906 19:41:43.318196   48447 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:41:43.318618   48447 main.go:141] libmachine: Using API Version  1
	I0906 19:41:43.318641   48447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:41:43.318894   48447 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:41:43.319077   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:41:43.319187   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetState
	I0906 19:41:43.320679   48447 fix.go:112] recreateIfNeeded on test-preload-767830: state=Stopped err=<nil>
	I0906 19:41:43.320701   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	W0906 19:41:43.320850   48447 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 19:41:43.323821   48447 out.go:177] * Restarting existing kvm2 VM for "test-preload-767830" ...
	I0906 19:41:43.324947   48447 main.go:141] libmachine: (test-preload-767830) Calling .Start
	I0906 19:41:43.325101   48447 main.go:141] libmachine: (test-preload-767830) Ensuring networks are active...
	I0906 19:41:43.325768   48447 main.go:141] libmachine: (test-preload-767830) Ensuring network default is active
	I0906 19:41:43.326027   48447 main.go:141] libmachine: (test-preload-767830) Ensuring network mk-test-preload-767830 is active
	I0906 19:41:43.326300   48447 main.go:141] libmachine: (test-preload-767830) Getting domain xml...
	I0906 19:41:43.326995   48447 main.go:141] libmachine: (test-preload-767830) Creating domain...
	I0906 19:41:44.517661   48447 main.go:141] libmachine: (test-preload-767830) Waiting to get IP...
	I0906 19:41:44.518476   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:44.518809   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:44.518859   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:44.518788   48498 retry.go:31] will retry after 194.907726ms: waiting for machine to come up
	I0906 19:41:44.715290   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:44.715759   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:44.715790   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:44.715706   48498 retry.go:31] will retry after 301.941193ms: waiting for machine to come up
	I0906 19:41:45.019109   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:45.019496   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:45.019529   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:45.019446   48498 retry.go:31] will retry after 442.770096ms: waiting for machine to come up
	I0906 19:41:45.463932   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:45.464360   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:45.464390   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:45.464314   48498 retry.go:31] will retry after 517.495111ms: waiting for machine to come up
	I0906 19:41:45.982919   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:45.983380   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:45.983409   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:45.983340   48498 retry.go:31] will retry after 721.559364ms: waiting for machine to come up
	I0906 19:41:46.706332   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:46.706737   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:46.706758   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:46.706696   48498 retry.go:31] will retry after 718.071863ms: waiting for machine to come up
	I0906 19:41:47.426621   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:47.426994   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:47.427020   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:47.426941   48498 retry.go:31] will retry after 733.156562ms: waiting for machine to come up
	I0906 19:41:48.161898   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:48.162256   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:48.162279   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:48.162217   48498 retry.go:31] will retry after 1.220040768s: waiting for machine to come up
	I0906 19:41:49.383948   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:49.384372   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:49.384395   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:49.384328   48498 retry.go:31] will retry after 1.844606326s: waiting for machine to come up
	I0906 19:41:51.231220   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:51.231698   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:51.231732   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:51.231643   48498 retry.go:31] will retry after 2.321538221s: waiting for machine to come up
	I0906 19:41:53.554575   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:53.555035   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:53.555060   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:53.554988   48498 retry.go:31] will retry after 2.412407956s: waiting for machine to come up
	I0906 19:41:55.970449   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:55.970774   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:55.970802   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:55.970741   48498 retry.go:31] will retry after 3.046247884s: waiting for machine to come up
	I0906 19:41:59.018961   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:41:59.019338   48447 main.go:141] libmachine: (test-preload-767830) DBG | unable to find current IP address of domain test-preload-767830 in network mk-test-preload-767830
	I0906 19:41:59.019363   48447 main.go:141] libmachine: (test-preload-767830) DBG | I0906 19:41:59.019303   48498 retry.go:31] will retry after 3.126012503s: waiting for machine to come up
	I0906 19:42:02.148600   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.149018   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has current primary IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.149032   48447 main.go:141] libmachine: (test-preload-767830) Found IP for machine: 192.168.39.40
	I0906 19:42:02.149069   48447 main.go:141] libmachine: (test-preload-767830) Reserving static IP address...
	I0906 19:42:02.149401   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "test-preload-767830", mac: "52:54:00:ae:af:41", ip: "192.168.39.40"} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:02.149415   48447 main.go:141] libmachine: (test-preload-767830) Reserved static IP address: 192.168.39.40
	I0906 19:42:02.149426   48447 main.go:141] libmachine: (test-preload-767830) DBG | skip adding static IP to network mk-test-preload-767830 - found existing host DHCP lease matching {name: "test-preload-767830", mac: "52:54:00:ae:af:41", ip: "192.168.39.40"}
	I0906 19:42:02.149450   48447 main.go:141] libmachine: (test-preload-767830) Waiting for SSH to be available...
	I0906 19:42:02.149468   48447 main.go:141] libmachine: (test-preload-767830) DBG | Getting to WaitForSSH function...
	I0906 19:42:02.151384   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.151632   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:02.151660   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.151735   48447 main.go:141] libmachine: (test-preload-767830) DBG | Using SSH client type: external
	I0906 19:42:02.151755   48447 main.go:141] libmachine: (test-preload-767830) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/test-preload-767830/id_rsa (-rw-------)
	I0906 19:42:02.151788   48447 main.go:141] libmachine: (test-preload-767830) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.40 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/test-preload-767830/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 19:42:02.151797   48447 main.go:141] libmachine: (test-preload-767830) DBG | About to run SSH command:
	I0906 19:42:02.151805   48447 main.go:141] libmachine: (test-preload-767830) DBG | exit 0
	I0906 19:42:02.276872   48447 main.go:141] libmachine: (test-preload-767830) DBG | SSH cmd err, output: <nil>: 
	I0906 19:42:02.277195   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetConfigRaw
	I0906 19:42:02.277828   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetIP
	I0906 19:42:02.279978   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.280295   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:02.280319   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.280578   48447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/config.json ...
	I0906 19:42:02.280752   48447 machine.go:93] provisionDockerMachine start ...
	I0906 19:42:02.280768   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:42:02.280985   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:02.282877   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.283141   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:02.283170   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.283319   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:02.283494   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:02.283625   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:02.283756   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:02.283883   48447 main.go:141] libmachine: Using SSH client type: native
	I0906 19:42:02.284092   48447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0906 19:42:02.284105   48447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 19:42:02.393117   48447 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 19:42:02.393149   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetMachineName
	I0906 19:42:02.393391   48447 buildroot.go:166] provisioning hostname "test-preload-767830"
	I0906 19:42:02.393437   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetMachineName
	I0906 19:42:02.393631   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:02.396029   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.396395   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:02.396430   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.396549   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:02.396721   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:02.396889   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:02.397038   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:02.397196   48447 main.go:141] libmachine: Using SSH client type: native
	I0906 19:42:02.397361   48447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0906 19:42:02.397374   48447 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-767830 && echo "test-preload-767830" | sudo tee /etc/hostname
	I0906 19:42:02.518742   48447 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-767830
	
	I0906 19:42:02.518780   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:02.521236   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.521568   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:02.521602   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.521703   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:02.521887   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:02.522053   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:02.522193   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:02.522347   48447 main.go:141] libmachine: Using SSH client type: native
	I0906 19:42:02.522542   48447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0906 19:42:02.522560   48447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-767830' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-767830/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-767830' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:42:02.643402   48447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:42:02.643427   48447 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:42:02.643463   48447 buildroot.go:174] setting up certificates
	I0906 19:42:02.643472   48447 provision.go:84] configureAuth start
	I0906 19:42:02.643480   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetMachineName
	I0906 19:42:02.643803   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetIP
	I0906 19:42:02.646096   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.646394   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:02.646424   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.646550   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:02.648426   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.648717   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:02.648740   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.648907   48447 provision.go:143] copyHostCerts
	I0906 19:42:02.648957   48447 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:42:02.648972   48447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:42:02.649068   48447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:42:02.649150   48447 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:42:02.649158   48447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:42:02.649182   48447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:42:02.649232   48447 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:42:02.649239   48447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:42:02.649260   48447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:42:02.649306   48447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.test-preload-767830 san=[127.0.0.1 192.168.39.40 localhost minikube test-preload-767830]
	I0906 19:42:02.789618   48447 provision.go:177] copyRemoteCerts
	I0906 19:42:02.789670   48447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:42:02.789692   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:02.792477   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.792806   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:02.792842   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.792984   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:02.793167   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:02.793312   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:02.793442   48447 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/test-preload-767830/id_rsa Username:docker}
	I0906 19:42:02.878233   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 19:42:02.901869   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 19:42:02.924936   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:42:02.947399   48447 provision.go:87] duration metric: took 303.914307ms to configureAuth
	I0906 19:42:02.947430   48447 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:42:02.947635   48447 config.go:182] Loaded profile config "test-preload-767830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0906 19:42:02.947721   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:02.950197   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.950529   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:02.950559   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:02.950667   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:02.950840   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:02.951000   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:02.951148   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:02.951297   48447 main.go:141] libmachine: Using SSH client type: native
	I0906 19:42:02.951476   48447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0906 19:42:02.951496   48447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:42:03.175330   48447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:42:03.175356   48447 machine.go:96] duration metric: took 894.593898ms to provisionDockerMachine
	I0906 19:42:03.175366   48447 start.go:293] postStartSetup for "test-preload-767830" (driver="kvm2")
	I0906 19:42:03.175376   48447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:42:03.175402   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:42:03.175726   48447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:42:03.175756   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:03.178153   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:03.178504   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:03.178534   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:03.178701   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:03.178857   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:03.179006   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:03.179131   48447 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/test-preload-767830/id_rsa Username:docker}
	I0906 19:42:03.262948   48447 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:42:03.267100   48447 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:42:03.267121   48447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:42:03.267176   48447 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:42:03.267243   48447 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:42:03.267325   48447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:42:03.275961   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:42:03.298957   48447 start.go:296] duration metric: took 123.580089ms for postStartSetup
	I0906 19:42:03.298992   48447 fix.go:56] duration metric: took 19.995987655s for fixHost
	I0906 19:42:03.299020   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:03.301300   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:03.301564   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:03.301594   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:03.301727   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:03.301901   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:03.302049   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:03.302219   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:03.302395   48447 main.go:141] libmachine: Using SSH client type: native
	I0906 19:42:03.302583   48447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.40 22 <nil> <nil>}
	I0906 19:42:03.302595   48447 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:42:03.413487   48447 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725651723.372271787
	
	I0906 19:42:03.413516   48447 fix.go:216] guest clock: 1725651723.372271787
	I0906 19:42:03.413528   48447 fix.go:229] Guest: 2024-09-06 19:42:03.372271787 +0000 UTC Remote: 2024-09-06 19:42:03.298997261 +0000 UTC m=+24.756250083 (delta=73.274526ms)
	I0906 19:42:03.413557   48447 fix.go:200] guest clock delta is within tolerance: 73.274526ms
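
The fix.go lines above compare the guest clock against the host wall clock and accept the 73ms drift because it falls under the tolerance. Below is a minimal standalone sketch of that comparison, using the timestamps from this log; the 2-second threshold is an illustrative assumption, not necessarily minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock delta is
// no larger than max (illustrative threshold, not minikube's code).
func withinTolerance(guest, host time.Time, max time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= max
}

func main() {
	// the two timestamps logged above
	guest := time.Date(2024, 9, 6, 19, 42, 3, 372271787, time.UTC)
	host := time.Date(2024, 9, 6, 19, 42, 3, 298997261, time.UTC)
	fmt.Println("delta:", guest.Sub(host), "ok:", withinTolerance(guest, host, 2*time.Second))
}
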
	I0906 19:42:03.413567   48447 start.go:83] releasing machines lock for "test-preload-767830", held for 20.110572279s
	I0906 19:42:03.413609   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:42:03.413867   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetIP
	I0906 19:42:03.416707   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:03.417052   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:03.417079   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:03.417267   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:42:03.417702   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:42:03.417860   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:42:03.417936   48447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:42:03.417982   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:03.418078   48447 ssh_runner.go:195] Run: cat /version.json
	I0906 19:42:03.418103   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:03.420330   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:03.420566   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:03.420679   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:03.420704   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:03.420834   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:03.420979   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:03.421000   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:03.421006   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:03.421143   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:03.421195   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:03.421319   48447 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/test-preload-767830/id_rsa Username:docker}
	I0906 19:42:03.421383   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:03.421529   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:03.421675   48447 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/test-preload-767830/id_rsa Username:docker}
	I0906 19:42:03.524270   48447 ssh_runner.go:195] Run: systemctl --version
	I0906 19:42:03.530465   48447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:42:03.675119   48447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 19:42:03.681675   48447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:42:03.681729   48447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:42:03.697184   48447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 19:42:03.697204   48447 start.go:495] detecting cgroup driver to use...
	I0906 19:42:03.697253   48447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:42:03.713527   48447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:42:03.727290   48447 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:42:03.727345   48447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:42:03.740953   48447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:42:03.754874   48447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:42:03.865968   48447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:42:04.019159   48447 docker.go:233] disabling docker service ...
	I0906 19:42:04.019230   48447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:42:04.034006   48447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:42:04.047051   48447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:42:04.177366   48447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:42:04.319082   48447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:42:04.333689   48447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:42:04.351428   48447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0906 19:42:04.351473   48447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:42:04.361663   48447 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:42:04.361716   48447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:42:04.372245   48447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:42:04.383373   48447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:42:04.393970   48447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:42:04.404673   48447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:42:04.414915   48447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:42:04.431256   48447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:42:04.441440   48447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:42:04.451222   48447 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 19:42:04.451270   48447 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 19:42:04.464502   48447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:42:04.474156   48447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:42:04.601168   48447 ssh_runner.go:195] Run: sudo systemctl restart crio
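
The run of sed commands above pins the pause image, switches the cgroup manager to cgroupfs, and injects the ip_unprivileged_port_start sysctl into /etc/crio/crio.conf.d/02-crio.conf, after which systemd is reloaded and CRI-O restarted. The sketch below drives the same style of edits; runCmd is a hypothetical stand-in for the ssh_runner seen in the log (it simply shells out locally), not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a hypothetical stand-in for the remote command runner in the log:
// it executes the command through a local bash and surfaces its output on error.
func runCmd(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []string{
		// pin the pause image, mirroring the first sed call in the log
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' %s`, conf),
		// use cgroupfs as the cgroup manager
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
	}
	for _, e := range edits {
		if err := runCmd(e); err != nil {
			fmt.Println(err)
			return
		}
	}
	// pick up the new configuration, as the last two commands above do
	if err := runCmd("sudo systemctl daemon-reload && sudo systemctl restart crio"); err != nil {
		fmt.Println(err)
	}
}
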
	I0906 19:42:04.686899   48447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:42:04.686982   48447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:42:04.692142   48447 start.go:563] Will wait 60s for crictl version
	I0906 19:42:04.692192   48447 ssh_runner.go:195] Run: which crictl
	I0906 19:42:04.696117   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:42:04.737264   48447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 19:42:04.737336   48447 ssh_runner.go:195] Run: crio --version
	I0906 19:42:04.764971   48447 ssh_runner.go:195] Run: crio --version
	I0906 19:42:04.792876   48447 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0906 19:42:04.794149   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetIP
	I0906 19:42:04.796743   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:04.797055   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:04.797083   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:04.797254   48447 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 19:42:04.801268   48447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 19:42:04.813702   48447 kubeadm.go:883] updating cluster {Name:test-preload-767830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-767830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:42:04.813798   48447 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0906 19:42:04.813837   48447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:42:04.847396   48447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0906 19:42:04.847463   48447 ssh_runner.go:195] Run: which lz4
	I0906 19:42:04.851547   48447 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 19:42:04.855606   48447 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 19:42:04.855636   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0906 19:42:06.346068   48447 crio.go:462] duration metric: took 1.494566374s to copy over tarball
	I0906 19:42:06.346138   48447 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 19:42:08.717273   48447 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.37109861s)
	I0906 19:42:08.717303   48447 crio.go:469] duration metric: took 2.371204789s to extract the tarball
	I0906 19:42:08.717312   48447 ssh_runner.go:146] rm: /preloaded.tar.lz4
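
Because no preloaded images were found in the container runtime, the ~459MB preload tarball is copied onto the VM, unpacked into /var with lz4-aware tar, and then removed. Below is a minimal sketch of that extract step, built around the same flags as the logged command (running it for real requires the tarball and root on the guest).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4"
	// preserve security xattrs and decompress with lz4, extracting under /var
	// so the image store lands in /var/lib/containers
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	// reclaim the disk space once the images are unpacked, as the log does
	_ = os.Remove(tarball)
}
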
	I0906 19:42:08.758589   48447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:42:08.808273   48447 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0906 19:42:08.808303   48447 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 19:42:08.808354   48447 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:42:08.808382   48447 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0906 19:42:08.808412   48447 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0906 19:42:08.808445   48447 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 19:42:08.808496   48447 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0906 19:42:08.808524   48447 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0906 19:42:08.808548   48447 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0906 19:42:08.808545   48447 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 19:42:08.809968   48447 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:42:08.809979   48447 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0906 19:42:08.809991   48447 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 19:42:08.810026   48447 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0906 19:42:08.809969   48447 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0906 19:42:08.809971   48447 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0906 19:42:08.809968   48447 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 19:42:08.809969   48447 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0906 19:42:08.973274   48447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 19:42:08.976778   48447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0906 19:42:08.982013   48447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0906 19:42:08.991945   48447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0906 19:42:08.994200   48447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0906 19:42:09.020648   48447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0906 19:42:09.066310   48447 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0906 19:42:09.066358   48447 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 19:42:09.066395   48447 ssh_runner.go:195] Run: which crictl
	I0906 19:42:09.078904   48447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0906 19:42:09.101224   48447 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0906 19:42:09.101265   48447 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0906 19:42:09.101321   48447 ssh_runner.go:195] Run: which crictl
	I0906 19:42:09.117051   48447 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0906 19:42:09.117089   48447 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0906 19:42:09.117135   48447 ssh_runner.go:195] Run: which crictl
	I0906 19:42:09.157165   48447 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0906 19:42:09.157271   48447 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0906 19:42:09.157344   48447 ssh_runner.go:195] Run: which crictl
	I0906 19:42:09.157364   48447 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0906 19:42:09.157397   48447 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0906 19:42:09.157432   48447 ssh_runner.go:195] Run: which crictl
	I0906 19:42:09.170162   48447 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0906 19:42:09.170199   48447 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0906 19:42:09.170240   48447 ssh_runner.go:195] Run: which crictl
	I0906 19:42:09.170246   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 19:42:09.173128   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0906 19:42:09.173203   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0906 19:42:09.173202   48447 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0906 19:42:09.173255   48447 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0906 19:42:09.173282   48447 ssh_runner.go:195] Run: which crictl
	I0906 19:42:09.173283   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0906 19:42:09.173224   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0906 19:42:09.264409   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 19:42:09.264466   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0906 19:42:09.283459   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0906 19:42:09.296064   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0906 19:42:09.296130   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0906 19:42:09.301733   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0906 19:42:09.301764   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0906 19:42:09.432204   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0906 19:42:09.432328   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0906 19:42:09.444066   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0906 19:42:09.444107   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0906 19:42:09.467636   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0906 19:42:09.467720   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0906 19:42:09.467784   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0906 19:42:09.560010   48447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0906 19:42:09.560117   48447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0906 19:42:09.560117   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0906 19:42:09.591054   48447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0906 19:42:09.591096   48447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0906 19:42:09.591163   48447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0906 19:42:09.591176   48447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0906 19:42:09.595163   48447 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:42:09.628727   48447 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0906 19:42:09.630023   48447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0906 19:42:09.630098   48447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0906 19:42:09.630133   48447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0906 19:42:09.630175   48447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0906 19:42:09.686856   48447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0906 19:42:09.686883   48447 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0906 19:42:09.686896   48447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0906 19:42:09.686942   48447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0906 19:42:09.686983   48447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0906 19:42:09.689057   48447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0906 19:42:09.689136   48447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0906 19:42:09.820844   48447 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0906 19:42:09.820870   48447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0906 19:42:09.820922   48447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0906 19:42:09.820960   48447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0906 19:42:11.959006   48447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.272041624s)
	I0906 19:42:11.959035   48447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0906 19:42:11.959057   48447 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0906 19:42:11.959079   48447 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.269922629s)
	I0906 19:42:11.959101   48447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0906 19:42:11.959110   48447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0906 19:42:11.959155   48447 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.138180041s)
	I0906 19:42:11.959172   48447 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0906 19:42:12.300582   48447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0906 19:42:12.300619   48447 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0906 19:42:12.300658   48447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0906 19:42:13.045863   48447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0906 19:42:13.045908   48447 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0906 19:42:13.045970   48447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0906 19:42:13.495533   48447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0906 19:42:13.495597   48447 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0906 19:42:13.495641   48447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0906 19:42:15.747189   48447 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.251520727s)
	I0906 19:42:15.747227   48447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0906 19:42:15.747252   48447 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0906 19:42:15.747312   48447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0906 19:42:15.889139   48447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0906 19:42:15.889178   48447 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0906 19:42:15.889218   48447 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0906 19:42:16.744028   48447 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0906 19:42:16.744081   48447 cache_images.go:123] Successfully loaded all cached images
	I0906 19:42:16.744089   48447 cache_images.go:92] duration metric: took 7.935772883s to LoadCachedImages
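
Each image the runtime reported missing was transferred from the local cache into /var/lib/minikube/images and fed to podman load one at a time, which is what the ~8s LoadCachedImages phase above covers. The loop below is an illustrative sketch of that step; the archive names are the ones from this log, and loadImage is a simplified stand-in for the real cache_images.go logic, not a copy of it.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadImage imports a cached image archive into the CRI-O image store via podman,
// mirroring the "sudo podman load -i ..." commands in the log.
func loadImage(archive string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", archive, err, out)
	}
	return nil
}

func main() {
	dir := "/var/lib/minikube/images"
	for _, name := range []string{
		"kube-controller-manager_v1.24.4", "coredns_v1.8.6", "kube-apiserver_v1.24.4",
		"kube-scheduler_v1.24.4", "etcd_3.5.3-0", "pause_3.7", "kube-proxy_v1.24.4",
	} {
		if err := loadImage(filepath.Join(dir, name)); err != nil {
			fmt.Println(err)
		}
	}
}
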
	I0906 19:42:16.744102   48447 kubeadm.go:934] updating node { 192.168.39.40 8443 v1.24.4 crio true true} ...
	I0906 19:42:16.744232   48447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-767830 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-767830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:42:16.744313   48447 ssh_runner.go:195] Run: crio config
	I0906 19:42:16.788513   48447 cni.go:84] Creating CNI manager for ""
	I0906 19:42:16.788540   48447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:42:16.788558   48447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:42:16.788576   48447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.40 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-767830 NodeName:test-preload-767830 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.40"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.40 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:42:16.788709   48447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.40
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-767830"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.40
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.40"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:42:16.788768   48447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0906 19:42:16.799326   48447 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:42:16.799400   48447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 19:42:16.808964   48447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0906 19:42:16.824785   48447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:42:16.840428   48447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0906 19:42:16.856917   48447 ssh_runner.go:195] Run: grep 192.168.39.40	control-plane.minikube.internal$ /etc/hosts
	I0906 19:42:16.860629   48447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.40	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
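
The grep/echo pipeline above makes the control-plane.minikube.internal mapping idempotent: any existing line for that host is stripped, the fresh IP-to-hostname entry is appended, and the result is copied back over /etc/hosts. The small Go sketch below applies the same add-or-replace behaviour directly to a hosts file; it is illustrative only, since the real run does this through bash on the guest.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<host>" and appends a fresh
// "ip\thost" mapping, mirroring the grep -v + echo pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // discard the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.40", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
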
	I0906 19:42:16.872979   48447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:42:16.988976   48447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:42:17.011644   48447 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830 for IP: 192.168.39.40
	I0906 19:42:17.011667   48447 certs.go:194] generating shared ca certs ...
	I0906 19:42:17.011684   48447 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:42:17.011834   48447 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:42:17.011905   48447 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:42:17.011919   48447 certs.go:256] generating profile certs ...
	I0906 19:42:17.012048   48447 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/client.key
	I0906 19:42:17.012134   48447 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/apiserver.key.987ad68a
	I0906 19:42:17.012206   48447 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/proxy-client.key
	I0906 19:42:17.012337   48447 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:42:17.012377   48447 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:42:17.012386   48447 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:42:17.012416   48447 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:42:17.012447   48447 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:42:17.012478   48447 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:42:17.012550   48447 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:42:17.013371   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:42:17.062516   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:42:17.100680   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:42:17.150888   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:42:17.185341   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0906 19:42:17.224411   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 19:42:17.248007   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:42:17.272732   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 19:42:17.295730   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:42:17.318151   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:42:17.340935   48447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:42:17.363845   48447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:42:17.381036   48447 ssh_runner.go:195] Run: openssl version
	I0906 19:42:17.386895   48447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:42:17.398037   48447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:42:17.402424   48447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:42:17.402476   48447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:42:17.408050   48447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 19:42:17.418909   48447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:42:17.429305   48447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:42:17.433614   48447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:42:17.433663   48447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:42:17.439033   48447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:42:17.449400   48447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:42:17.459804   48447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:42:17.464024   48447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:42:17.464072   48447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:42:17.469662   48447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
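
Each CA installed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (13178.pem -> 51391683.0, 131782.pem -> 3ec20f2e.0, minikubeCA.pem -> b5213941.0 above), which is the layout OpenSSL uses to look trusted CAs up by hash. The sketch below is an illustrative version of that hash-and-symlink step, built around the same openssl invocation the log runs; it is not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks a CA PEM into certsDir as "<subject-hash>.0",
// skipping the link if it already exists (the "test -L || ln -fs" guard above).
func linkBySubjectHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
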
	I0906 19:42:17.480197   48447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:42:17.484706   48447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 19:42:17.490366   48447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 19:42:17.495980   48447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 19:42:17.501687   48447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 19:42:17.507151   48447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 19:42:17.512706   48447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
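
The six openssl probes above each exit non-zero if the given control-plane certificate would expire within the next 86400 seconds, presumably so the restart path can decide whether certificates need regenerating. A tiny sketch of the same check, with paths copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay mirrors "openssl x509 -checkend 86400": the command fails
// when the certificate expires within the next 86400 seconds (or cannot be read).
func expiresWithinADay(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() != nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Println(c, "expiring within 24h:", expiresWithinADay(c))
	}
}
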
	I0906 19:42:17.518313   48447 kubeadm.go:392] StartCluster: {Name:test-preload-767830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-767830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:42:17.518415   48447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:42:17.518466   48447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:42:17.557077   48447 cri.go:89] found id: ""
	I0906 19:42:17.557173   48447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 19:42:17.567485   48447 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 19:42:17.567508   48447 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 19:42:17.567567   48447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 19:42:17.577145   48447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 19:42:17.577571   48447 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-767830" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:42:17.577701   48447 kubeconfig.go:62] /home/jenkins/minikube-integration/19576-6021/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-767830" cluster setting kubeconfig missing "test-preload-767830" context setting]
	I0906 19:42:17.577999   48447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:42:17.578687   48447 kapi.go:59] client config for test-preload-767830: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/client.crt", KeyFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/client.key", CAFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 19:42:17.579325   48447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 19:42:17.588706   48447 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.40
	I0906 19:42:17.588734   48447 kubeadm.go:1160] stopping kube-system containers ...
	I0906 19:42:17.588744   48447 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 19:42:17.588804   48447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:42:17.623101   48447 cri.go:89] found id: ""
	I0906 19:42:17.623175   48447 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 19:42:17.640755   48447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 19:42:17.650940   48447 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 19:42:17.650958   48447 kubeadm.go:157] found existing configuration files:
	
	I0906 19:42:17.651006   48447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 19:42:17.660681   48447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 19:42:17.660734   48447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 19:42:17.670301   48447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 19:42:17.679406   48447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 19:42:17.679457   48447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 19:42:17.688900   48447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 19:42:17.698089   48447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 19:42:17.698144   48447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 19:42:17.707580   48447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 19:42:17.716564   48447 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 19:42:17.716625   48447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 19:42:17.725952   48447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 19:42:17.735349   48447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 19:42:17.824272   48447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 19:42:18.193983   48447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 19:42:18.456829   48447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 19:42:18.521934   48447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 19:42:18.620823   48447 api_server.go:52] waiting for apiserver process to appear ...
	I0906 19:42:18.620921   48447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:42:19.121057   48447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:42:19.621870   48447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:42:19.652922   48447 api_server.go:72] duration metric: took 1.03209689s to wait for apiserver process to appear ...
	I0906 19:42:19.652957   48447 api_server.go:88] waiting for apiserver healthz status ...
	I0906 19:42:19.652986   48447 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I0906 19:42:19.653495   48447 api_server.go:269] stopped: https://192.168.39.40:8443/healthz: Get "https://192.168.39.40:8443/healthz": dial tcp 192.168.39.40:8443: connect: connection refused
	I0906 19:42:20.153079   48447 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I0906 19:42:24.015430   48447 api_server.go:279] https://192.168.39.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 19:42:24.015462   48447 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 19:42:24.015477   48447 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I0906 19:42:24.057560   48447 api_server.go:279] https://192.168.39.40:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 19:42:24.057596   48447 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 19:42:24.153837   48447 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I0906 19:42:24.172594   48447 api_server.go:279] https://192.168.39.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 19:42:24.172620   48447 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 19:42:24.654008   48447 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I0906 19:42:24.659723   48447 api_server.go:279] https://192.168.39.40:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 19:42:24.659748   48447 api_server.go:103] status: https://192.168.39.40:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 19:42:25.153334   48447 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I0906 19:42:25.158695   48447 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
	ok
	I0906 19:42:25.165499   48447 api_server.go:141] control plane version: v1.24.4
	I0906 19:42:25.165523   48447 api_server.go:131] duration metric: took 5.512558355s to wait for apiserver health ...
	I0906 19:42:25.165534   48447 cni.go:84] Creating CNI manager for ""
	I0906 19:42:25.165541   48447 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:42:25.167423   48447 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 19:42:25.168619   48447 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 19:42:25.179784   48447 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 19:42:25.207498   48447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 19:42:25.207620   48447 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0906 19:42:25.207646   48447 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0906 19:42:25.230569   48447 system_pods.go:59] 8 kube-system pods found
	I0906 19:42:25.230598   48447 system_pods.go:61] "coredns-6d4b75cb6d-fj2s4" [1e053de1-4bae-4c74-9f3d-b0dbeb917f0e] Running
	I0906 19:42:25.230602   48447 system_pods.go:61] "coredns-6d4b75cb6d-prmn7" [ddf29a0e-5c59-4846-b22a-ccc9f890f9b6] Running
	I0906 19:42:25.230611   48447 system_pods.go:61] "etcd-test-preload-767830" [186f73cb-e575-4c35-bcca-c734c195342d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 19:42:25.230617   48447 system_pods.go:61] "kube-apiserver-test-preload-767830" [5ff23da0-b03e-4b4c-9749-c4a191bd4abe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 19:42:25.230628   48447 system_pods.go:61] "kube-controller-manager-test-preload-767830" [7afbecde-2a63-4409-9b4b-e004cc75c51e] Running
	I0906 19:42:25.230635   48447 system_pods.go:61] "kube-proxy-qfgzq" [9e66f5fd-16d4-40c1-a952-2fe249eacf16] Running
	I0906 19:42:25.230638   48447 system_pods.go:61] "kube-scheduler-test-preload-767830" [a8457ff1-4e24-46bc-bdd9-b8573e82d204] Running
	I0906 19:42:25.230642   48447 system_pods.go:61] "storage-provisioner" [522a0f5c-02db-4714-b8ec-2280563356aa] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 19:42:25.230647   48447 system_pods.go:74] duration metric: took 23.125109ms to wait for pod list to return data ...
	I0906 19:42:25.230654   48447 node_conditions.go:102] verifying NodePressure condition ...
	I0906 19:42:25.239642   48447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 19:42:25.239667   48447 node_conditions.go:123] node cpu capacity is 2
	I0906 19:42:25.239677   48447 node_conditions.go:105] duration metric: took 9.01865ms to run NodePressure ...
	I0906 19:42:25.239692   48447 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 19:42:25.451496   48447 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 19:42:25.456041   48447 kubeadm.go:739] kubelet initialised
	I0906 19:42:25.456063   48447 kubeadm.go:740] duration metric: took 4.543796ms waiting for restarted kubelet to initialise ...
	I0906 19:42:25.456071   48447 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 19:42:25.464151   48447 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-fj2s4" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:25.476338   48447 pod_ready.go:98] node "test-preload-767830" hosting pod "coredns-6d4b75cb6d-fj2s4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:25.476371   48447 pod_ready.go:82] duration metric: took 12.195829ms for pod "coredns-6d4b75cb6d-fj2s4" in "kube-system" namespace to be "Ready" ...
	E0906 19:42:25.476383   48447 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-767830" hosting pod "coredns-6d4b75cb6d-fj2s4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:25.476393   48447 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-prmn7" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:25.483539   48447 pod_ready.go:98] node "test-preload-767830" hosting pod "coredns-6d4b75cb6d-prmn7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:25.483570   48447 pod_ready.go:82] duration metric: took 7.165197ms for pod "coredns-6d4b75cb6d-prmn7" in "kube-system" namespace to be "Ready" ...
	E0906 19:42:25.483580   48447 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-767830" hosting pod "coredns-6d4b75cb6d-prmn7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:25.483588   48447 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:25.490407   48447 pod_ready.go:98] node "test-preload-767830" hosting pod "etcd-test-preload-767830" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:25.490445   48447 pod_ready.go:82] duration metric: took 6.845317ms for pod "etcd-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	E0906 19:42:25.490455   48447 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-767830" hosting pod "etcd-test-preload-767830" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:25.490473   48447 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:25.611740   48447 pod_ready.go:98] node "test-preload-767830" hosting pod "kube-apiserver-test-preload-767830" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:25.611779   48447 pod_ready.go:82] duration metric: took 121.292327ms for pod "kube-apiserver-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	E0906 19:42:25.611793   48447 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-767830" hosting pod "kube-apiserver-test-preload-767830" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:25.611803   48447 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:26.011786   48447 pod_ready.go:98] node "test-preload-767830" hosting pod "kube-controller-manager-test-preload-767830" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:26.011814   48447 pod_ready.go:82] duration metric: took 399.99675ms for pod "kube-controller-manager-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	E0906 19:42:26.011828   48447 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-767830" hosting pod "kube-controller-manager-test-preload-767830" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:26.011835   48447 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-qfgzq" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:26.411888   48447 pod_ready.go:98] node "test-preload-767830" hosting pod "kube-proxy-qfgzq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:26.411921   48447 pod_ready.go:82] duration metric: took 400.077727ms for pod "kube-proxy-qfgzq" in "kube-system" namespace to be "Ready" ...
	E0906 19:42:26.411931   48447 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-767830" hosting pod "kube-proxy-qfgzq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:26.411939   48447 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:26.810500   48447 pod_ready.go:98] node "test-preload-767830" hosting pod "kube-scheduler-test-preload-767830" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:26.810534   48447 pod_ready.go:82] duration metric: took 398.588182ms for pod "kube-scheduler-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	E0906 19:42:26.810544   48447 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-767830" hosting pod "kube-scheduler-test-preload-767830" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:26.810551   48447 pod_ready.go:39] duration metric: took 1.354471789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 19:42:26.810568   48447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 19:42:26.825545   48447 ops.go:34] apiserver oom_adj: -16
	I0906 19:42:26.825562   48447 kubeadm.go:597] duration metric: took 9.258049242s to restartPrimaryControlPlane
	I0906 19:42:26.825570   48447 kubeadm.go:394] duration metric: took 9.307263439s to StartCluster
	I0906 19:42:26.825584   48447 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:42:26.825645   48447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:42:26.826208   48447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:42:26.826419   48447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.40 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 19:42:26.826503   48447 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 19:42:26.826584   48447 addons.go:69] Setting storage-provisioner=true in profile "test-preload-767830"
	I0906 19:42:26.826598   48447 addons.go:69] Setting default-storageclass=true in profile "test-preload-767830"
	I0906 19:42:26.826620   48447 addons.go:234] Setting addon storage-provisioner=true in "test-preload-767830"
	W0906 19:42:26.826632   48447 addons.go:243] addon storage-provisioner should already be in state true
	I0906 19:42:26.826633   48447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-767830"
	I0906 19:42:26.826664   48447 host.go:66] Checking if "test-preload-767830" exists ...
	I0906 19:42:26.826692   48447 config.go:182] Loaded profile config "test-preload-767830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0906 19:42:26.826977   48447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:42:26.827025   48447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:42:26.827062   48447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:42:26.827171   48447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:42:26.828209   48447 out.go:177] * Verifying Kubernetes components...
	I0906 19:42:26.829429   48447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:42:26.844738   48447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I0906 19:42:26.845018   48447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43333
	I0906 19:42:26.845229   48447 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:42:26.845368   48447 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:42:26.845768   48447 main.go:141] libmachine: Using API Version  1
	I0906 19:42:26.845795   48447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:42:26.845770   48447 main.go:141] libmachine: Using API Version  1
	I0906 19:42:26.845831   48447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:42:26.846104   48447 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:42:26.846149   48447 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:42:26.846323   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetState
	I0906 19:42:26.846705   48447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:42:26.846753   48447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:42:26.848494   48447 kapi.go:59] client config for test-preload-767830: &rest.Config{Host:"https://192.168.39.40:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/client.crt", KeyFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/profiles/test-preload-767830/client.key", CAFile:"/home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0906 19:42:26.848712   48447 addons.go:234] Setting addon default-storageclass=true in "test-preload-767830"
	W0906 19:42:26.848725   48447 addons.go:243] addon default-storageclass should already be in state true
	I0906 19:42:26.848745   48447 host.go:66] Checking if "test-preload-767830" exists ...
	I0906 19:42:26.849029   48447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:42:26.849064   48447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:42:26.862096   48447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I0906 19:42:26.862650   48447 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:42:26.863092   48447 main.go:141] libmachine: Using API Version  1
	I0906 19:42:26.863111   48447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:42:26.863402   48447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I0906 19:42:26.863413   48447 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:42:26.863602   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetState
	I0906 19:42:26.863711   48447 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:42:26.864119   48447 main.go:141] libmachine: Using API Version  1
	I0906 19:42:26.864142   48447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:42:26.864450   48447 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:42:26.865087   48447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:42:26.865137   48447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:42:26.865176   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:42:26.867180   48447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:42:26.868345   48447 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 19:42:26.868360   48447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 19:42:26.868374   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:26.871108   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:26.871514   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:26.871541   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:26.871689   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:26.871860   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:26.872008   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:26.872170   48447 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/test-preload-767830/id_rsa Username:docker}
	I0906 19:42:26.880938   48447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0906 19:42:26.881312   48447 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:42:26.881796   48447 main.go:141] libmachine: Using API Version  1
	I0906 19:42:26.881818   48447 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:42:26.882184   48447 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:42:26.882380   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetState
	I0906 19:42:26.883893   48447 main.go:141] libmachine: (test-preload-767830) Calling .DriverName
	I0906 19:42:26.884124   48447 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 19:42:26.884140   48447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 19:42:26.884159   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHHostname
	I0906 19:42:26.886334   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:26.886690   48447 main.go:141] libmachine: (test-preload-767830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:af:41", ip: ""} in network mk-test-preload-767830: {Iface:virbr1 ExpiryTime:2024-09-06 20:40:12 +0000 UTC Type:0 Mac:52:54:00:ae:af:41 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:test-preload-767830 Clientid:01:52:54:00:ae:af:41}
	I0906 19:42:26.886717   48447 main.go:141] libmachine: (test-preload-767830) DBG | domain test-preload-767830 has defined IP address 192.168.39.40 and MAC address 52:54:00:ae:af:41 in network mk-test-preload-767830
	I0906 19:42:26.886941   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHPort
	I0906 19:42:26.887105   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHKeyPath
	I0906 19:42:26.887252   48447 main.go:141] libmachine: (test-preload-767830) Calling .GetSSHUsername
	I0906 19:42:26.887396   48447 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/test-preload-767830/id_rsa Username:docker}
	I0906 19:42:27.012826   48447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:42:27.032915   48447 node_ready.go:35] waiting up to 6m0s for node "test-preload-767830" to be "Ready" ...
	I0906 19:42:27.125015   48447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 19:42:27.129929   48447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 19:42:28.121383   48447 main.go:141] libmachine: Making call to close driver server
	I0906 19:42:28.121408   48447 main.go:141] libmachine: (test-preload-767830) Calling .Close
	I0906 19:42:28.121716   48447 main.go:141] libmachine: Successfully made call to close driver server
	I0906 19:42:28.121746   48447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 19:42:28.121763   48447 main.go:141] libmachine: Making call to close driver server
	I0906 19:42:28.121773   48447 main.go:141] libmachine: (test-preload-767830) Calling .Close
	I0906 19:42:28.122033   48447 main.go:141] libmachine: (test-preload-767830) DBG | Closing plugin on server side
	I0906 19:42:28.122067   48447 main.go:141] libmachine: Successfully made call to close driver server
	I0906 19:42:28.122085   48447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 19:42:28.129783   48447 main.go:141] libmachine: Making call to close driver server
	I0906 19:42:28.129806   48447 main.go:141] libmachine: (test-preload-767830) Calling .Close
	I0906 19:42:28.130036   48447 main.go:141] libmachine: Successfully made call to close driver server
	I0906 19:42:28.130052   48447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 19:42:28.168044   48447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.038072417s)
	I0906 19:42:28.168078   48447 main.go:141] libmachine: Making call to close driver server
	I0906 19:42:28.168086   48447 main.go:141] libmachine: (test-preload-767830) Calling .Close
	I0906 19:42:28.168336   48447 main.go:141] libmachine: Successfully made call to close driver server
	I0906 19:42:28.168366   48447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 19:42:28.168366   48447 main.go:141] libmachine: (test-preload-767830) DBG | Closing plugin on server side
	I0906 19:42:28.168380   48447 main.go:141] libmachine: Making call to close driver server
	I0906 19:42:28.168388   48447 main.go:141] libmachine: (test-preload-767830) Calling .Close
	I0906 19:42:28.168618   48447 main.go:141] libmachine: Successfully made call to close driver server
	I0906 19:42:28.168635   48447 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 19:42:28.170418   48447 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0906 19:42:28.171472   48447 addons.go:510] duration metric: took 1.344976713s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0906 19:42:29.036442   48447 node_ready.go:53] node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:31.038180   48447 node_ready.go:53] node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:33.536710   48447 node_ready.go:53] node "test-preload-767830" has status "Ready":"False"
	I0906 19:42:34.537778   48447 node_ready.go:49] node "test-preload-767830" has status "Ready":"True"
	I0906 19:42:34.537811   48447 node_ready.go:38] duration metric: took 7.504864326s for node "test-preload-767830" to be "Ready" ...
	I0906 19:42:34.537820   48447 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 19:42:34.542602   48447 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-fj2s4" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:34.547528   48447 pod_ready.go:93] pod "coredns-6d4b75cb6d-fj2s4" in "kube-system" namespace has status "Ready":"True"
	I0906 19:42:34.547553   48447 pod_ready.go:82] duration metric: took 4.925813ms for pod "coredns-6d4b75cb6d-fj2s4" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:34.547562   48447 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:34.551566   48447 pod_ready.go:93] pod "etcd-test-preload-767830" in "kube-system" namespace has status "Ready":"True"
	I0906 19:42:34.551587   48447 pod_ready.go:82] duration metric: took 4.018596ms for pod "etcd-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:34.551597   48447 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:35.557194   48447 pod_ready.go:93] pod "kube-apiserver-test-preload-767830" in "kube-system" namespace has status "Ready":"True"
	I0906 19:42:35.557217   48447 pod_ready.go:82] duration metric: took 1.005611837s for pod "kube-apiserver-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:35.557226   48447 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:35.561692   48447 pod_ready.go:93] pod "kube-controller-manager-test-preload-767830" in "kube-system" namespace has status "Ready":"True"
	I0906 19:42:35.561709   48447 pod_ready.go:82] duration metric: took 4.477159ms for pod "kube-controller-manager-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:35.561716   48447 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qfgzq" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:35.737853   48447 pod_ready.go:93] pod "kube-proxy-qfgzq" in "kube-system" namespace has status "Ready":"True"
	I0906 19:42:35.737878   48447 pod_ready.go:82] duration metric: took 176.155828ms for pod "kube-proxy-qfgzq" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:35.737887   48447 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:36.138298   48447 pod_ready.go:93] pod "kube-scheduler-test-preload-767830" in "kube-system" namespace has status "Ready":"True"
	I0906 19:42:36.138322   48447 pod_ready.go:82] duration metric: took 400.429138ms for pod "kube-scheduler-test-preload-767830" in "kube-system" namespace to be "Ready" ...
	I0906 19:42:36.138332   48447 pod_ready.go:39] duration metric: took 1.600502367s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 19:42:36.138356   48447 api_server.go:52] waiting for apiserver process to appear ...
	I0906 19:42:36.138406   48447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:42:36.153079   48447 api_server.go:72] duration metric: took 9.326632327s to wait for apiserver process to appear ...
	I0906 19:42:36.153105   48447 api_server.go:88] waiting for apiserver healthz status ...
	I0906 19:42:36.153127   48447 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8443/healthz ...
	I0906 19:42:36.158138   48447 api_server.go:279] https://192.168.39.40:8443/healthz returned 200:
	ok
	I0906 19:42:36.159065   48447 api_server.go:141] control plane version: v1.24.4
	I0906 19:42:36.159081   48447 api_server.go:131] duration metric: took 5.970843ms to wait for apiserver health ...
	I0906 19:42:36.159089   48447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 19:42:36.341824   48447 system_pods.go:59] 7 kube-system pods found
	I0906 19:42:36.341849   48447 system_pods.go:61] "coredns-6d4b75cb6d-fj2s4" [1e053de1-4bae-4c74-9f3d-b0dbeb917f0e] Running
	I0906 19:42:36.341860   48447 system_pods.go:61] "etcd-test-preload-767830" [186f73cb-e575-4c35-bcca-c734c195342d] Running
	I0906 19:42:36.341866   48447 system_pods.go:61] "kube-apiserver-test-preload-767830" [5ff23da0-b03e-4b4c-9749-c4a191bd4abe] Running
	I0906 19:42:36.341869   48447 system_pods.go:61] "kube-controller-manager-test-preload-767830" [7afbecde-2a63-4409-9b4b-e004cc75c51e] Running
	I0906 19:42:36.341872   48447 system_pods.go:61] "kube-proxy-qfgzq" [9e66f5fd-16d4-40c1-a952-2fe249eacf16] Running
	I0906 19:42:36.341875   48447 system_pods.go:61] "kube-scheduler-test-preload-767830" [a8457ff1-4e24-46bc-bdd9-b8573e82d204] Running
	I0906 19:42:36.341878   48447 system_pods.go:61] "storage-provisioner" [522a0f5c-02db-4714-b8ec-2280563356aa] Running
	I0906 19:42:36.341883   48447 system_pods.go:74] duration metric: took 182.789189ms to wait for pod list to return data ...
	I0906 19:42:36.341891   48447 default_sa.go:34] waiting for default service account to be created ...
	I0906 19:42:36.537960   48447 default_sa.go:45] found service account: "default"
	I0906 19:42:36.538005   48447 default_sa.go:55] duration metric: took 196.105468ms for default service account to be created ...
	I0906 19:42:36.538019   48447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 19:42:36.739840   48447 system_pods.go:86] 7 kube-system pods found
	I0906 19:42:36.739866   48447 system_pods.go:89] "coredns-6d4b75cb6d-fj2s4" [1e053de1-4bae-4c74-9f3d-b0dbeb917f0e] Running
	I0906 19:42:36.739876   48447 system_pods.go:89] "etcd-test-preload-767830" [186f73cb-e575-4c35-bcca-c734c195342d] Running
	I0906 19:42:36.739879   48447 system_pods.go:89] "kube-apiserver-test-preload-767830" [5ff23da0-b03e-4b4c-9749-c4a191bd4abe] Running
	I0906 19:42:36.739883   48447 system_pods.go:89] "kube-controller-manager-test-preload-767830" [7afbecde-2a63-4409-9b4b-e004cc75c51e] Running
	I0906 19:42:36.739891   48447 system_pods.go:89] "kube-proxy-qfgzq" [9e66f5fd-16d4-40c1-a952-2fe249eacf16] Running
	I0906 19:42:36.739895   48447 system_pods.go:89] "kube-scheduler-test-preload-767830" [a8457ff1-4e24-46bc-bdd9-b8573e82d204] Running
	I0906 19:42:36.739900   48447 system_pods.go:89] "storage-provisioner" [522a0f5c-02db-4714-b8ec-2280563356aa] Running
	I0906 19:42:36.739907   48447 system_pods.go:126] duration metric: took 201.880702ms to wait for k8s-apps to be running ...
	I0906 19:42:36.739913   48447 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 19:42:36.739957   48447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:42:36.754599   48447 system_svc.go:56] duration metric: took 14.671389ms WaitForService to wait for kubelet
	I0906 19:42:36.754632   48447 kubeadm.go:582] duration metric: took 9.928188636s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:42:36.754657   48447 node_conditions.go:102] verifying NodePressure condition ...
	I0906 19:42:36.937387   48447 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 19:42:36.937420   48447 node_conditions.go:123] node cpu capacity is 2
	I0906 19:42:36.937433   48447 node_conditions.go:105] duration metric: took 182.770317ms to run NodePressure ...
	I0906 19:42:36.937449   48447 start.go:241] waiting for startup goroutines ...
	I0906 19:42:36.937460   48447 start.go:246] waiting for cluster config update ...
	I0906 19:42:36.937476   48447 start.go:255] writing updated cluster config ...
	I0906 19:42:36.937818   48447 ssh_runner.go:195] Run: rm -f paused
	I0906 19:42:37.000480   48447 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0906 19:42:37.002773   48447 out.go:201] 
	W0906 19:42:37.004022   48447 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0906 19:42:37.005206   48447 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0906 19:42:37.006538   48447 out.go:177] * Done! kubectl is now configured to use "test-preload-767830" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.876701320Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651757876678341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5cae44b-969a-47e0-aa1c-57b839eea798 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.877287830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aaaccb9f-9122-4a77-89ee-51a22067af89 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.877381543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aaaccb9f-9122-4a77-89ee-51a22067af89 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.877617917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa13c468357f99a5ab241c70b24a0afb8d0615a9b70ebcc633c9789e9bef420,PodSandboxId:fe7f7892dc902978be168b53074ea19d42cc812ec7e5b8a1ad15c6f0042d2f0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1725651752790927031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fj2s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e053de1-4bae-4c74-9f3d-b0dbeb917f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6e865a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f27f77b0065ca361dd94c1bead3f0c103e3817ad5af95044561644ea86ee322c,PodSandboxId:8c046ae647d3c1829704eae9fe916be1560c44b56a079fd2d2b39e4a54bbf14d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1725651745555671909,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qfgzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9e66f5fd-16d4-40c1-a952-2fe249eacf16,},Annotations:map[string]string{io.kubernetes.container.hash: 5b95d5f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656dd77caeac489777fc82449c971973b4379a962732f1e868e0b7d83d60da3e,PodSandboxId:ef50d57d25e4e39414bd4e39f977dc9bbdf70d3575de30039bb385a9bb1b279b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651745333509376,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52
2a0f5c-02db-4714-b8ec-2280563356aa,},Annotations:map[string]string{io.kubernetes.container.hash: 69f800e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:318b57bbe4432b6240e678716e151bfdb62fed44b0334ffea8c01e5ff8b32857,PodSandboxId:1dbfc261ebbb9b96c86b0e93dd58d1c81528a2b24f8f547a6b432d255c46bcdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1725651739408994731,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c9e5cf0a
f08e622fc5b29f7a11e8c9f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d0af5b8a93eb0f3f5a12eebf8ce92a0aeb2d40c5114c3ac4432e6a588be830,PodSandboxId:505ad3a840015000daf434f5cb3afc942146c61a90b5ce82b0146d014b8cc7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1725651739318294629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc4c92e62ec9178e8cfb048648a223a,},Annotations:map
[string]string{io.kubernetes.container.hash: b01ab1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab831b961cb9bcb3296dee5c64ea43afa9c415a329b7769d269c25dd9c09406,PodSandboxId:8666f8bb0a32edfcd74f27f463fb5e77b724a87b2089a5b400db05ab95bbae42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1725651739308848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688f43a305abf153e00c27bbac621ff8,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc995296c64111ee3c9e3eb29c3a7992b4da499223ee4a130868fd6cf8a4eea,PodSandboxId:560de2cb290fbbab1bc61b07c288555c9f0d92a79f53e6973f71cf42a775b5e4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1725651739264756697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372c86eb28d4777008eabe21155b8ab,},Annotations
:map[string]string{io.kubernetes.container.hash: 7431651b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aaaccb9f-9122-4a77-89ee-51a22067af89 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.919612922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1639fc38-99b5-4dc4-bc32-6d305953b8e0 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.919704491Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1639fc38-99b5-4dc4-bc32-6d305953b8e0 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.927762496Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57ff5b96-0717-41b7-9a0e-fa0b70e52736 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.928480149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651757928284109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57ff5b96-0717-41b7-9a0e-fa0b70e52736 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.929539635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91a3005c-6a83-4c01-aa61-1b7cc4e8bf12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.929589500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91a3005c-6a83-4c01-aa61-1b7cc4e8bf12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.929883501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa13c468357f99a5ab241c70b24a0afb8d0615a9b70ebcc633c9789e9bef420,PodSandboxId:fe7f7892dc902978be168b53074ea19d42cc812ec7e5b8a1ad15c6f0042d2f0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1725651752790927031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fj2s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e053de1-4bae-4c74-9f3d-b0dbeb917f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6e865a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f27f77b0065ca361dd94c1bead3f0c103e3817ad5af95044561644ea86ee322c,PodSandboxId:8c046ae647d3c1829704eae9fe916be1560c44b56a079fd2d2b39e4a54bbf14d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1725651745555671909,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qfgzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9e66f5fd-16d4-40c1-a952-2fe249eacf16,},Annotations:map[string]string{io.kubernetes.container.hash: 5b95d5f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656dd77caeac489777fc82449c971973b4379a962732f1e868e0b7d83d60da3e,PodSandboxId:ef50d57d25e4e39414bd4e39f977dc9bbdf70d3575de30039bb385a9bb1b279b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651745333509376,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52
2a0f5c-02db-4714-b8ec-2280563356aa,},Annotations:map[string]string{io.kubernetes.container.hash: 69f800e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:318b57bbe4432b6240e678716e151bfdb62fed44b0334ffea8c01e5ff8b32857,PodSandboxId:1dbfc261ebbb9b96c86b0e93dd58d1c81528a2b24f8f547a6b432d255c46bcdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1725651739408994731,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c9e5cf0a
f08e622fc5b29f7a11e8c9f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d0af5b8a93eb0f3f5a12eebf8ce92a0aeb2d40c5114c3ac4432e6a588be830,PodSandboxId:505ad3a840015000daf434f5cb3afc942146c61a90b5ce82b0146d014b8cc7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1725651739318294629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc4c92e62ec9178e8cfb048648a223a,},Annotations:map
[string]string{io.kubernetes.container.hash: b01ab1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab831b961cb9bcb3296dee5c64ea43afa9c415a329b7769d269c25dd9c09406,PodSandboxId:8666f8bb0a32edfcd74f27f463fb5e77b724a87b2089a5b400db05ab95bbae42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1725651739308848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688f43a305abf153e00c27bbac621ff8,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc995296c64111ee3c9e3eb29c3a7992b4da499223ee4a130868fd6cf8a4eea,PodSandboxId:560de2cb290fbbab1bc61b07c288555c9f0d92a79f53e6973f71cf42a775b5e4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1725651739264756697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372c86eb28d4777008eabe21155b8ab,},Annotations
:map[string]string{io.kubernetes.container.hash: 7431651b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91a3005c-6a83-4c01-aa61-1b7cc4e8bf12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.967733840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb02f808-a2f2-452d-aa5e-7fd130837b2c name=/runtime.v1.RuntimeService/Version
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.967806697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb02f808-a2f2-452d-aa5e-7fd130837b2c name=/runtime.v1.RuntimeService/Version
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.969480102Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a60bc60b-bf92-4f61-88d8-48d1dd5b09ee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.969922352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651757969898240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a60bc60b-bf92-4f61-88d8-48d1dd5b09ee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.970536813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6481f19-1507-4d35-a783-765568e491a9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.970590753Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6481f19-1507-4d35-a783-765568e491a9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:37 test-preload-767830 crio[679]: time="2024-09-06 19:42:37.970741267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa13c468357f99a5ab241c70b24a0afb8d0615a9b70ebcc633c9789e9bef420,PodSandboxId:fe7f7892dc902978be168b53074ea19d42cc812ec7e5b8a1ad15c6f0042d2f0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1725651752790927031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fj2s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e053de1-4bae-4c74-9f3d-b0dbeb917f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6e865a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f27f77b0065ca361dd94c1bead3f0c103e3817ad5af95044561644ea86ee322c,PodSandboxId:8c046ae647d3c1829704eae9fe916be1560c44b56a079fd2d2b39e4a54bbf14d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1725651745555671909,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qfgzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9e66f5fd-16d4-40c1-a952-2fe249eacf16,},Annotations:map[string]string{io.kubernetes.container.hash: 5b95d5f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656dd77caeac489777fc82449c971973b4379a962732f1e868e0b7d83d60da3e,PodSandboxId:ef50d57d25e4e39414bd4e39f977dc9bbdf70d3575de30039bb385a9bb1b279b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651745333509376,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52
2a0f5c-02db-4714-b8ec-2280563356aa,},Annotations:map[string]string{io.kubernetes.container.hash: 69f800e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:318b57bbe4432b6240e678716e151bfdb62fed44b0334ffea8c01e5ff8b32857,PodSandboxId:1dbfc261ebbb9b96c86b0e93dd58d1c81528a2b24f8f547a6b432d255c46bcdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1725651739408994731,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c9e5cf0a
f08e622fc5b29f7a11e8c9f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d0af5b8a93eb0f3f5a12eebf8ce92a0aeb2d40c5114c3ac4432e6a588be830,PodSandboxId:505ad3a840015000daf434f5cb3afc942146c61a90b5ce82b0146d014b8cc7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1725651739318294629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc4c92e62ec9178e8cfb048648a223a,},Annotations:map
[string]string{io.kubernetes.container.hash: b01ab1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab831b961cb9bcb3296dee5c64ea43afa9c415a329b7769d269c25dd9c09406,PodSandboxId:8666f8bb0a32edfcd74f27f463fb5e77b724a87b2089a5b400db05ab95bbae42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1725651739308848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688f43a305abf153e00c27bbac621ff8,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc995296c64111ee3c9e3eb29c3a7992b4da499223ee4a130868fd6cf8a4eea,PodSandboxId:560de2cb290fbbab1bc61b07c288555c9f0d92a79f53e6973f71cf42a775b5e4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1725651739264756697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372c86eb28d4777008eabe21155b8ab,},Annotations
:map[string]string{io.kubernetes.container.hash: 7431651b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6481f19-1507-4d35-a783-765568e491a9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:38 test-preload-767830 crio[679]: time="2024-09-06 19:42:38.006563277Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1875498a-dd31-4d74-b9a3-9474e022914a name=/runtime.v1.RuntimeService/Version
	Sep 06 19:42:38 test-preload-767830 crio[679]: time="2024-09-06 19:42:38.006649454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1875498a-dd31-4d74-b9a3-9474e022914a name=/runtime.v1.RuntimeService/Version
	Sep 06 19:42:38 test-preload-767830 crio[679]: time="2024-09-06 19:42:38.007849122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b43ec96-4484-429c-a2a4-77817c84afe0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:42:38 test-preload-767830 crio[679]: time="2024-09-06 19:42:38.008287658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725651758008266842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b43ec96-4484-429c-a2a4-77817c84afe0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:42:38 test-preload-767830 crio[679]: time="2024-09-06 19:42:38.008958442Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d4e6bfc-9928-4606-ae6c-de8bb900a417 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:38 test-preload-767830 crio[679]: time="2024-09-06 19:42:38.009010093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d4e6bfc-9928-4606-ae6c-de8bb900a417 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:42:38 test-preload-767830 crio[679]: time="2024-09-06 19:42:38.009169587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0aa13c468357f99a5ab241c70b24a0afb8d0615a9b70ebcc633c9789e9bef420,PodSandboxId:fe7f7892dc902978be168b53074ea19d42cc812ec7e5b8a1ad15c6f0042d2f0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1725651752790927031,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-fj2s4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e053de1-4bae-4c74-9f3d-b0dbeb917f0e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f6e865a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f27f77b0065ca361dd94c1bead3f0c103e3817ad5af95044561644ea86ee322c,PodSandboxId:8c046ae647d3c1829704eae9fe916be1560c44b56a079fd2d2b39e4a54bbf14d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1725651745555671909,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qfgzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9e66f5fd-16d4-40c1-a952-2fe249eacf16,},Annotations:map[string]string{io.kubernetes.container.hash: 5b95d5f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656dd77caeac489777fc82449c971973b4379a962732f1e868e0b7d83d60da3e,PodSandboxId:ef50d57d25e4e39414bd4e39f977dc9bbdf70d3575de30039bb385a9bb1b279b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725651745333509376,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52
2a0f5c-02db-4714-b8ec-2280563356aa,},Annotations:map[string]string{io.kubernetes.container.hash: 69f800e0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:318b57bbe4432b6240e678716e151bfdb62fed44b0334ffea8c01e5ff8b32857,PodSandboxId:1dbfc261ebbb9b96c86b0e93dd58d1c81528a2b24f8f547a6b432d255c46bcdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1725651739408994731,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c9e5cf0a
f08e622fc5b29f7a11e8c9f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d0af5b8a93eb0f3f5a12eebf8ce92a0aeb2d40c5114c3ac4432e6a588be830,PodSandboxId:505ad3a840015000daf434f5cb3afc942146c61a90b5ce82b0146d014b8cc7c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1725651739318294629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afc4c92e62ec9178e8cfb048648a223a,},Annotations:map
[string]string{io.kubernetes.container.hash: b01ab1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aab831b961cb9bcb3296dee5c64ea43afa9c415a329b7769d269c25dd9c09406,PodSandboxId:8666f8bb0a32edfcd74f27f463fb5e77b724a87b2089a5b400db05ab95bbae42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1725651739308848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 688f43a305abf153e00c27bbac621ff8,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efc995296c64111ee3c9e3eb29c3a7992b4da499223ee4a130868fd6cf8a4eea,PodSandboxId:560de2cb290fbbab1bc61b07c288555c9f0d92a79f53e6973f71cf42a775b5e4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1725651739264756697,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-767830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7372c86eb28d4777008eabe21155b8ab,},Annotations
:map[string]string{io.kubernetes.container.hash: 7431651b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d4e6bfc-9928-4606-ae6c-de8bb900a417 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0aa13c468357f       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   fe7f7892dc902       coredns-6d4b75cb6d-fj2s4
	f27f77b0065ca       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   8c046ae647d3c       kube-proxy-qfgzq
	656dd77caeac4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   ef50d57d25e4e       storage-provisioner
	318b57bbe4432       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   1dbfc261ebbb9       kube-scheduler-test-preload-767830
	a5d0af5b8a93e       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   505ad3a840015       etcd-test-preload-767830
	aab831b961cb9       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   8666f8bb0a32e       kube-controller-manager-test-preload-767830
	efc995296c641       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   560de2cb290fb       kube-apiserver-test-preload-767830
	
	
	==> coredns [0aa13c468357f99a5ab241c70b24a0afb8d0615a9b70ebcc633c9789e9bef420] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:47874 - 53895 "HINFO IN 7187292938283091743.112063917252873463. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010659388s
	
	
	==> describe nodes <==
	Name:               test-preload-767830
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-767830
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=test-preload-767830
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T19_41_10_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:41:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-767830
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:42:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:42:34 +0000   Fri, 06 Sep 2024 19:41:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:42:34 +0000   Fri, 06 Sep 2024 19:41:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:42:34 +0000   Fri, 06 Sep 2024 19:41:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:42:34 +0000   Fri, 06 Sep 2024 19:42:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    test-preload-767830
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06abf94af24c4be986bf11e7b2bab16b
	  System UUID:                06abf94a-f24c-4be9-86bf-11e7b2bab16b
	  Boot ID:                    8dd4e182-5dee-414b-b1e8-8390c1f1176e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-fj2s4                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     75s
	  kube-system                 etcd-test-preload-767830                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         89s
	  kube-system                 kube-apiserver-test-preload-767830             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-test-preload-767830    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-qfgzq                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-test-preload-767830             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 73s                kube-proxy       
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 95s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s (x4 over 95s)  kubelet          Node test-preload-767830 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     95s (x3 over 95s)  kubelet          Node test-preload-767830 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    95s (x4 over 95s)  kubelet          Node test-preload-767830 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                kubelet          Node test-preload-767830 status is now: NodeHasSufficientPID
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s                kubelet          Node test-preload-767830 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s                kubelet          Node test-preload-767830 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                77s                kubelet          Node test-preload-767830 status is now: NodeReady
	  Normal  RegisteredNode           76s                node-controller  Node test-preload-767830 event: Registered Node test-preload-767830 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-767830 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-767830 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-767830 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node test-preload-767830 event: Registered Node test-preload-767830 in Controller
	
	
	==> dmesg <==
	[Sep 6 19:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050251] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040062] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752091] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.493626] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.544168] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 19:42] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.055866] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052626] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.198320] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.139050] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.281720] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[ +12.392506] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[  +0.059199] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.394535] systemd-fstab-generator[1130]: Ignoring "noauto" option for root device
	[  +6.416890] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.121195] systemd-fstab-generator[1763]: Ignoring "noauto" option for root device
	[  +5.707327] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [a5d0af5b8a93eb0f3f5a12eebf8ce92a0aeb2d40c5114c3ac4432e6a588be830] <==
	{"level":"info","ts":"2024-09-06T19:42:19.850Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"1088a855a4aa8d0a","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-06T19:42:19.853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a switched to configuration voters=(1191387187227823370)"}
	{"level":"info","ts":"2024-09-06T19:42:19.857Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-06T19:42:19.865Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ca485a4cd00ef8c5","local-member-id":"1088a855a4aa8d0a","added-peer-id":"1088a855a4aa8d0a","added-peer-peer-urls":["https://192.168.39.40:2380"]}
	{"level":"info","ts":"2024-09-06T19:42:19.865Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ca485a4cd00ef8c5","local-member-id":"1088a855a4aa8d0a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:42:19.865Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:42:19.871Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-06T19:42:19.873Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.40:2380"}
	{"level":"info","ts":"2024-09-06T19:42:19.873Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.40:2380"}
	{"level":"info","ts":"2024-09-06T19:42:19.874Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"1088a855a4aa8d0a","initial-advertise-peer-urls":["https://192.168.39.40:2380"],"listen-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.40:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T19:42:19.874Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T19:42:21.510Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:42:21.510Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:42:21.510Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a received MsgPreVoteResp from 1088a855a4aa8d0a at term 2"}
	{"level":"info","ts":"2024-09-06T19:42:21.510Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:42:21.510Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a received MsgVoteResp from 1088a855a4aa8d0a at term 3"}
	{"level":"info","ts":"2024-09-06T19:42:21.510Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1088a855a4aa8d0a became leader at term 3"}
	{"level":"info","ts":"2024-09-06T19:42:21.510Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1088a855a4aa8d0a elected leader 1088a855a4aa8d0a at term 3"}
	{"level":"info","ts":"2024-09-06T19:42:21.511Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"1088a855a4aa8d0a","local-member-attributes":"{Name:test-preload-767830 ClientURLs:[https://192.168.39.40:2379]}","request-path":"/0/members/1088a855a4aa8d0a/attributes","cluster-id":"ca485a4cd00ef8c5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:42:21.511Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:42:21.513Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.40:2379"}
	{"level":"info","ts":"2024-09-06T19:42:21.513Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:42:21.515Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:42:21.515Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:42:21.515Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:42:38 up 0 min,  0 users,  load average: 0.66, 0.17, 0.06
	Linux test-preload-767830 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [efc995296c64111ee3c9e3eb29c3a7992b4da499223ee4a130868fd6cf8a4eea] <==
	I0906 19:42:23.906993       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:42:23.907040       1 controller.go:83] Starting OpenAPI AggregationController
	I0906 19:42:23.875957       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:42:23.967150       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:42:23.967535       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:42:23.970234       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0906 19:42:23.970244       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0906 19:42:24.021618       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0906 19:42:24.070385       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0906 19:42:24.078059       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0906 19:42:24.088647       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0906 19:42:24.098425       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0906 19:42:24.103098       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:42:24.107405       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:42:24.107537       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:42:24.572567       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0906 19:42:24.883026       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 19:42:25.319954       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0906 19:42:25.331219       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0906 19:42:25.370875       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0906 19:42:25.389662       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:42:25.398407       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 19:42:25.793432       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0906 19:42:37.140573       1 controller.go:611] quota admission added evaluator for: endpoints
	I0906 19:42:37.154913       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [aab831b961cb9bcb3296dee5c64ea43afa9c415a329b7769d269c25dd9c09406] <==
	I0906 19:42:37.024417       1 shared_informer.go:262] Caches are synced for stateful set
	I0906 19:42:37.025637       1 shared_informer.go:262] Caches are synced for ephemeral
	I0906 19:42:37.028837       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0906 19:42:37.039254       1 shared_informer.go:262] Caches are synced for GC
	I0906 19:42:37.042394       1 shared_informer.go:262] Caches are synced for taint
	I0906 19:42:37.042593       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0906 19:42:37.042673       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0906 19:42:37.042724       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-767830. Assuming now as a timestamp.
	I0906 19:42:37.042783       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0906 19:42:37.043635       1 event.go:294] "Event occurred" object="test-preload-767830" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-767830 event: Registered Node test-preload-767830 in Controller"
	I0906 19:42:37.049909       1 shared_informer.go:262] Caches are synced for attach detach
	I0906 19:42:37.050002       1 shared_informer.go:262] Caches are synced for cronjob
	I0906 19:42:37.051928       1 shared_informer.go:262] Caches are synced for deployment
	I0906 19:42:37.055531       1 shared_informer.go:262] Caches are synced for job
	I0906 19:42:37.057593       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0906 19:42:37.130169       1 shared_informer.go:262] Caches are synced for endpoint
	I0906 19:42:37.147478       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0906 19:42:37.149370       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0906 19:42:37.246071       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 19:42:37.249452       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0906 19:42:37.249908       1 shared_informer.go:262] Caches are synced for crt configmap
	I0906 19:42:37.282123       1 shared_informer.go:262] Caches are synced for resource quota
	I0906 19:42:37.693087       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 19:42:37.721048       1 shared_informer.go:262] Caches are synced for garbage collector
	I0906 19:42:37.721084       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [f27f77b0065ca361dd94c1bead3f0c103e3817ad5af95044561644ea86ee322c] <==
	I0906 19:42:25.742854       1 node.go:163] Successfully retrieved node IP: 192.168.39.40
	I0906 19:42:25.743111       1 server_others.go:138] "Detected node IP" address="192.168.39.40"
	I0906 19:42:25.743232       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0906 19:42:25.776374       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0906 19:42:25.776404       1 server_others.go:206] "Using iptables Proxier"
	I0906 19:42:25.776669       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0906 19:42:25.776984       1 server.go:661] "Version info" version="v1.24.4"
	I0906 19:42:25.777009       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:42:25.778994       1 config.go:317] "Starting service config controller"
	I0906 19:42:25.783655       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0906 19:42:25.787514       1 config.go:226] "Starting endpoint slice config controller"
	I0906 19:42:25.787545       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0906 19:42:25.789409       1 config.go:444] "Starting node config controller"
	I0906 19:42:25.789519       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0906 19:42:25.887682       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0906 19:42:25.887745       1 shared_informer.go:262] Caches are synced for service config
	I0906 19:42:25.889697       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [318b57bbe4432b6240e678716e151bfdb62fed44b0334ffea8c01e5ff8b32857] <==
	I0906 19:42:20.495546       1 serving.go:348] Generated self-signed cert in-memory
	W0906 19:42:23.984478       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:42:23.984670       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:42:23.984764       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:42:23.984849       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:42:24.039758       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0906 19:42:24.039867       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:42:24.045052       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0906 19:42:24.045266       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:42:24.045364       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:42:24.047996       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0906 19:42:24.145576       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.560859    1137 apiserver.go:52] "Watching apiserver"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.567408    1137 topology_manager.go:200] "Topology Admit Handler"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.567519    1137 topology_manager.go:200] "Topology Admit Handler"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.567553    1137 topology_manager.go:200] "Topology Admit Handler"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.567610    1137 topology_manager.go:200] "Topology Admit Handler"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: E0906 19:42:24.569573    1137 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fj2s4" podUID=1e053de1-4bae-4c74-9f3d-b0dbeb917f0e
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.629663    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzblf\" (UniqueName: \"kubernetes.io/projected/9e66f5fd-16d4-40c1-a952-2fe249eacf16-kube-api-access-vzblf\") pod \"kube-proxy-qfgzq\" (UID: \"9e66f5fd-16d4-40c1-a952-2fe249eacf16\") " pod="kube-system/kube-proxy-qfgzq"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.629704    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfp8v\" (UniqueName: \"kubernetes.io/projected/522a0f5c-02db-4714-b8ec-2280563356aa-kube-api-access-nfp8v\") pod \"storage-provisioner\" (UID: \"522a0f5c-02db-4714-b8ec-2280563356aa\") " pod="kube-system/storage-provisioner"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.629730    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e66f5fd-16d4-40c1-a952-2fe249eacf16-lib-modules\") pod \"kube-proxy-qfgzq\" (UID: \"9e66f5fd-16d4-40c1-a952-2fe249eacf16\") " pod="kube-system/kube-proxy-qfgzq"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.629757    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb6b4\" (UniqueName: \"kubernetes.io/projected/1e053de1-4bae-4c74-9f3d-b0dbeb917f0e-kube-api-access-wb6b4\") pod \"coredns-6d4b75cb6d-fj2s4\" (UID: \"1e053de1-4bae-4c74-9f3d-b0dbeb917f0e\") " pod="kube-system/coredns-6d4b75cb6d-fj2s4"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.629777    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9e66f5fd-16d4-40c1-a952-2fe249eacf16-kube-proxy\") pod \"kube-proxy-qfgzq\" (UID: \"9e66f5fd-16d4-40c1-a952-2fe249eacf16\") " pod="kube-system/kube-proxy-qfgzq"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.629795    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/522a0f5c-02db-4714-b8ec-2280563356aa-tmp\") pod \"storage-provisioner\" (UID: \"522a0f5c-02db-4714-b8ec-2280563356aa\") " pod="kube-system/storage-provisioner"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.629812    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e66f5fd-16d4-40c1-a952-2fe249eacf16-xtables-lock\") pod \"kube-proxy-qfgzq\" (UID: \"9e66f5fd-16d4-40c1-a952-2fe249eacf16\") " pod="kube-system/kube-proxy-qfgzq"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.629835    1137 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e053de1-4bae-4c74-9f3d-b0dbeb917f0e-config-volume\") pod \"coredns-6d4b75cb6d-fj2s4\" (UID: \"1e053de1-4bae-4c74-9f3d-b0dbeb917f0e\") " pod="kube-system/coredns-6d4b75cb6d-fj2s4"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: I0906 19:42:24.629845    1137 reconciler.go:159] "Reconciler: start to sync state"
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: E0906 19:42:24.735243    1137 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 06 19:42:24 test-preload-767830 kubelet[1137]: E0906 19:42:24.735439    1137 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1e053de1-4bae-4c74-9f3d-b0dbeb917f0e-config-volume podName:1e053de1-4bae-4c74-9f3d-b0dbeb917f0e nodeName:}" failed. No retries permitted until 2024-09-06 19:42:25.235393736 +0000 UTC m=+6.817594262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1e053de1-4bae-4c74-9f3d-b0dbeb917f0e-config-volume") pod "coredns-6d4b75cb6d-fj2s4" (UID: "1e053de1-4bae-4c74-9f3d-b0dbeb917f0e") : object "kube-system"/"coredns" not registered
	Sep 06 19:42:25 test-preload-767830 kubelet[1137]: E0906 19:42:25.237137    1137 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 06 19:42:25 test-preload-767830 kubelet[1137]: E0906 19:42:25.237221    1137 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1e053de1-4bae-4c74-9f3d-b0dbeb917f0e-config-volume podName:1e053de1-4bae-4c74-9f3d-b0dbeb917f0e nodeName:}" failed. No retries permitted until 2024-09-06 19:42:26.237205839 +0000 UTC m=+7.819406343 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1e053de1-4bae-4c74-9f3d-b0dbeb917f0e-config-volume") pod "coredns-6d4b75cb6d-fj2s4" (UID: "1e053de1-4bae-4c74-9f3d-b0dbeb917f0e") : object "kube-system"/"coredns" not registered
	Sep 06 19:42:26 test-preload-767830 kubelet[1137]: E0906 19:42:26.243037    1137 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 06 19:42:26 test-preload-767830 kubelet[1137]: E0906 19:42:26.243131    1137 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1e053de1-4bae-4c74-9f3d-b0dbeb917f0e-config-volume podName:1e053de1-4bae-4c74-9f3d-b0dbeb917f0e nodeName:}" failed. No retries permitted until 2024-09-06 19:42:28.243108298 +0000 UTC m=+9.825308803 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1e053de1-4bae-4c74-9f3d-b0dbeb917f0e-config-volume") pod "coredns-6d4b75cb6d-fj2s4" (UID: "1e053de1-4bae-4c74-9f3d-b0dbeb917f0e") : object "kube-system"/"coredns" not registered
	Sep 06 19:42:26 test-preload-767830 kubelet[1137]: E0906 19:42:26.668569    1137 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-fj2s4" podUID=1e053de1-4bae-4c74-9f3d-b0dbeb917f0e
	Sep 06 19:42:28 test-preload-767830 kubelet[1137]: E0906 19:42:28.258725    1137 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 06 19:42:28 test-preload-767830 kubelet[1137]: E0906 19:42:28.258831    1137 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/1e053de1-4bae-4c74-9f3d-b0dbeb917f0e-config-volume podName:1e053de1-4bae-4c74-9f3d-b0dbeb917f0e nodeName:}" failed. No retries permitted until 2024-09-06 19:42:32.258813926 +0000 UTC m=+13.841014431 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1e053de1-4bae-4c74-9f3d-b0dbeb917f0e-config-volume") pod "coredns-6d4b75cb6d-fj2s4" (UID: "1e053de1-4bae-4c74-9f3d-b0dbeb917f0e") : object "kube-system"/"coredns" not registered
	Sep 06 19:42:28 test-preload-767830 kubelet[1137]: I0906 19:42:28.671871    1137 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=ddf29a0e-5c59-4846-b22a-ccc9f890f9b6 path="/var/lib/kubelet/pods/ddf29a0e-5c59-4846-b22a-ccc9f890f9b6/volumes"
	
	
	==> storage-provisioner [656dd77caeac489777fc82449c971973b4379a962732f1e868e0b7d83d60da3e] <==
	I0906 19:42:25.460297       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-767830 -n test-preload-767830
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-767830 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-767830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-767830
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-767830: (1.13136999s)
--- FAIL: TestPreload (162.29s)
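
Diagnostic note: the kubelet log above ends on two related conditions that keep the restarted node not ready until the test times out: no CNI configuration has been written to /etc/cni/net.d/, and the coredns ConfigMap is reported as "not registered", so the config-volume mount for coredns-6d4b75cb6d-fj2s4 is retried with increasing backoff (500ms, 1s, 2s, 4s). Below is a minimal sketch of commands for inspecting that state by hand on a locally reproduced run; it assumes the test-preload-767830 profile still exists (this run deletes it during cleanup a few lines down).

  # does the coredns ConfigMap exist in the restarted cluster?
  kubectl --context test-preload-767830 -n kube-system get configmap coredns

  # what CNI configuration (if any) is present on the node?
  minikube ssh -p test-preload-767830 -- ls -l /etc/cni/net.d/

  # follow the mount retries and network-not-ready events for the coredns pod
  kubectl --context test-preload-767830 -n kube-system describe pod -l k8s-app=kube-dns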

                                                
                                    
x
+
TestKubernetesUpgrade (435.01s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-959423 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-959423 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m28.901277556s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-959423] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-959423" primary control-plane node in "kubernetes-upgrade-959423" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:44:33.012960   49987 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:44:33.013279   49987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:44:33.013292   49987 out.go:358] Setting ErrFile to fd 2...
	I0906 19:44:33.013297   49987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:44:33.013458   49987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:44:33.014059   49987 out.go:352] Setting JSON to false
	I0906 19:44:33.015126   49987 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5222,"bootTime":1725646651,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:44:33.015181   49987 start.go:139] virtualization: kvm guest
	I0906 19:44:33.017307   49987 out.go:177] * [kubernetes-upgrade-959423] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:44:33.019365   49987 notify.go:220] Checking for updates...
	I0906 19:44:33.019744   49987 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:44:33.021060   49987 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:44:33.023203   49987 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:44:33.024383   49987 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:44:33.025874   49987 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:44:33.028049   49987 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:44:33.029440   49987 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:44:33.067590   49987 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 19:44:33.069077   49987 start.go:297] selected driver: kvm2
	I0906 19:44:33.069090   49987 start.go:901] validating driver "kvm2" against <nil>
	I0906 19:44:33.069111   49987 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:44:33.069854   49987 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:44:33.069949   49987 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:44:33.086624   49987 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:44:33.086681   49987 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 19:44:33.086946   49987 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 19:44:33.086979   49987 cni.go:84] Creating CNI manager for ""
	I0906 19:44:33.086990   49987 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:44:33.087002   49987 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 19:44:33.087060   49987 start.go:340] cluster config:
	{Name:kubernetes-upgrade-959423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-959423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:44:33.087186   49987 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:44:33.089587   49987 out.go:177] * Starting "kubernetes-upgrade-959423" primary control-plane node in "kubernetes-upgrade-959423" cluster
	I0906 19:44:33.091110   49987 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 19:44:33.091143   49987 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:44:33.091152   49987 cache.go:56] Caching tarball of preloaded images
	I0906 19:44:33.091249   49987 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:44:33.091263   49987 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0906 19:44:33.091535   49987 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/config.json ...
	I0906 19:44:33.091555   49987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/config.json: {Name:mk72c1823390a6d6e7ed22d009e771930eb6a8c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:44:33.091712   49987 start.go:360] acquireMachinesLock for kubernetes-upgrade-959423: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:44:33.091744   49987 start.go:364] duration metric: took 18.868µs to acquireMachinesLock for "kubernetes-upgrade-959423"
	I0906 19:44:33.091767   49987 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-959423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernete
s-upgrade-959423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 19:44:33.091818   49987 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 19:44:33.093641   49987 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 19:44:33.093761   49987 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:44:33.093802   49987 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:44:33.112621   49987 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
	I0906 19:44:33.113090   49987 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:44:33.113677   49987 main.go:141] libmachine: Using API Version  1
	I0906 19:44:33.113711   49987 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:44:33.114142   49987 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:44:33.114337   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetMachineName
	I0906 19:44:33.114505   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:44:33.114656   49987 start.go:159] libmachine.API.Create for "kubernetes-upgrade-959423" (driver="kvm2")
	I0906 19:44:33.114700   49987 client.go:168] LocalClient.Create starting
	I0906 19:44:33.114730   49987 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 19:44:33.114761   49987 main.go:141] libmachine: Decoding PEM data...
	I0906 19:44:33.114776   49987 main.go:141] libmachine: Parsing certificate...
	I0906 19:44:33.114846   49987 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 19:44:33.114876   49987 main.go:141] libmachine: Decoding PEM data...
	I0906 19:44:33.114892   49987 main.go:141] libmachine: Parsing certificate...
	I0906 19:44:33.114923   49987 main.go:141] libmachine: Running pre-create checks...
	I0906 19:44:33.114933   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .PreCreateCheck
	I0906 19:44:33.115337   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetConfigRaw
	I0906 19:44:33.115736   49987 main.go:141] libmachine: Creating machine...
	I0906 19:44:33.115747   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .Create
	I0906 19:44:33.115870   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Creating KVM machine...
	I0906 19:44:33.117153   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found existing default KVM network
	I0906 19:44:33.118012   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:33.117847   50049 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1c0}
	I0906 19:44:33.118034   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | created network xml: 
	I0906 19:44:33.118049   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | <network>
	I0906 19:44:33.118068   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG |   <name>mk-kubernetes-upgrade-959423</name>
	I0906 19:44:33.118086   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG |   <dns enable='no'/>
	I0906 19:44:33.118093   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG |   
	I0906 19:44:33.118105   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0906 19:44:33.118113   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG |     <dhcp>
	I0906 19:44:33.118133   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0906 19:44:33.118146   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG |     </dhcp>
	I0906 19:44:33.118158   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG |   </ip>
	I0906 19:44:33.118168   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG |   
	I0906 19:44:33.118178   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | </network>
	I0906 19:44:33.118188   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | 
	I0906 19:44:33.124176   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | trying to create private KVM network mk-kubernetes-upgrade-959423 192.168.39.0/24...
	I0906 19:44:33.203049   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | private KVM network mk-kubernetes-upgrade-959423 192.168.39.0/24 created
	I0906 19:44:33.203077   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:33.203018   50049 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:44:33.203091   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423 ...
	I0906 19:44:33.203114   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 19:44:33.203186   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 19:44:33.470004   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:33.469849   50049 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa...
	I0906 19:44:33.640002   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:33.639893   50049 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/kubernetes-upgrade-959423.rawdisk...
	I0906 19:44:33.640035   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Writing magic tar header
	I0906 19:44:33.640067   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Writing SSH key tar header
	I0906 19:44:33.640080   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:33.640012   50049 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423 ...
	I0906 19:44:33.640097   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423
	I0906 19:44:33.640120   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423 (perms=drwx------)
	I0906 19:44:33.640148   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 19:44:33.640162   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:44:33.640174   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 19:44:33.640194   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 19:44:33.640209   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 19:44:33.640225   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 19:44:33.640236   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 19:44:33.640246   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 19:44:33.640258   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Creating domain...
	I0906 19:44:33.640269   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 19:44:33.640277   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Checking permissions on dir: /home/jenkins
	I0906 19:44:33.640287   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Checking permissions on dir: /home
	I0906 19:44:33.640296   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Skipping /home - not owner
	I0906 19:44:33.641389   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) define libvirt domain using xml: 
	I0906 19:44:33.641416   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) <domain type='kvm'>
	I0906 19:44:33.641428   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   <name>kubernetes-upgrade-959423</name>
	I0906 19:44:33.641438   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   <memory unit='MiB'>2200</memory>
	I0906 19:44:33.641447   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   <vcpu>2</vcpu>
	I0906 19:44:33.641458   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   <features>
	I0906 19:44:33.641467   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <acpi/>
	I0906 19:44:33.641477   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <apic/>
	I0906 19:44:33.641500   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <pae/>
	I0906 19:44:33.641520   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     
	I0906 19:44:33.641619   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   </features>
	I0906 19:44:33.641651   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   <cpu mode='host-passthrough'>
	I0906 19:44:33.641663   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   
	I0906 19:44:33.641674   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   </cpu>
	I0906 19:44:33.641684   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   <os>
	I0906 19:44:33.641695   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <type>hvm</type>
	I0906 19:44:33.641707   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <boot dev='cdrom'/>
	I0906 19:44:33.641718   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <boot dev='hd'/>
	I0906 19:44:33.641730   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <bootmenu enable='no'/>
	I0906 19:44:33.641741   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   </os>
	I0906 19:44:33.641752   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   <devices>
	I0906 19:44:33.641763   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <disk type='file' device='cdrom'>
	I0906 19:44:33.641776   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/boot2docker.iso'/>
	I0906 19:44:33.641787   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <target dev='hdc' bus='scsi'/>
	I0906 19:44:33.641800   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <readonly/>
	I0906 19:44:33.641813   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     </disk>
	I0906 19:44:33.641825   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <disk type='file' device='disk'>
	I0906 19:44:33.641838   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 19:44:33.641859   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/kubernetes-upgrade-959423.rawdisk'/>
	I0906 19:44:33.641870   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <target dev='hda' bus='virtio'/>
	I0906 19:44:33.641880   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     </disk>
	I0906 19:44:33.641895   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <interface type='network'>
	I0906 19:44:33.641909   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <source network='mk-kubernetes-upgrade-959423'/>
	I0906 19:44:33.641920   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <model type='virtio'/>
	I0906 19:44:33.641931   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     </interface>
	I0906 19:44:33.641942   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <interface type='network'>
	I0906 19:44:33.641961   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <source network='default'/>
	I0906 19:44:33.641978   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <model type='virtio'/>
	I0906 19:44:33.641989   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     </interface>
	I0906 19:44:33.641999   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <serial type='pty'>
	I0906 19:44:33.642007   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <target port='0'/>
	I0906 19:44:33.642017   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     </serial>
	I0906 19:44:33.642029   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <console type='pty'>
	I0906 19:44:33.642040   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <target type='serial' port='0'/>
	I0906 19:44:33.642051   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     </console>
	I0906 19:44:33.642063   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     <rng model='virtio'>
	I0906 19:44:33.642075   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)       <backend model='random'>/dev/random</backend>
	I0906 19:44:33.642086   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     </rng>
	I0906 19:44:33.642097   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     
	I0906 19:44:33.642109   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)     
	I0906 19:44:33.642125   49987 main.go:141] libmachine: (kubernetes-upgrade-959423)   </devices>
	I0906 19:44:33.642147   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) </domain>
	I0906 19:44:33.642165   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) 
	I0906 19:44:33.646496   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:c9:b0:43 in network default
	I0906 19:44:33.647261   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Ensuring networks are active...
	I0906 19:44:33.647295   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:33.648058   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Ensuring network default is active
	I0906 19:44:33.648425   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Ensuring network mk-kubernetes-upgrade-959423 is active
	I0906 19:44:33.649070   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Getting domain xml...
	I0906 19:44:33.649928   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Creating domain...
	I0906 19:44:34.853675   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Waiting to get IP...
	I0906 19:44:34.854497   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:34.854843   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:34.854892   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:34.854839   50049 retry.go:31] will retry after 208.312033ms: waiting for machine to come up
	I0906 19:44:35.065255   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:35.065679   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:35.065713   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:35.065634   50049 retry.go:31] will retry after 249.855079ms: waiting for machine to come up
	I0906 19:44:35.317119   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:35.317518   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:35.317549   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:35.317469   50049 retry.go:31] will retry after 420.619708ms: waiting for machine to come up
	I0906 19:44:35.740084   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:35.740528   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:35.740551   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:35.740493   50049 retry.go:31] will retry after 385.658387ms: waiting for machine to come up
	I0906 19:44:36.127784   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:36.128227   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:36.128271   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:36.128188   50049 retry.go:31] will retry after 560.995965ms: waiting for machine to come up
	I0906 19:44:36.690874   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:36.691293   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:36.691319   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:36.691234   50049 retry.go:31] will retry after 723.93351ms: waiting for machine to come up
	I0906 19:44:37.417140   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:37.417514   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:37.417549   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:37.417468   50049 retry.go:31] will retry after 751.17445ms: waiting for machine to come up
	I0906 19:44:38.169877   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:38.170327   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:38.170358   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:38.170254   50049 retry.go:31] will retry after 903.74175ms: waiting for machine to come up
	I0906 19:44:39.075517   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:39.075952   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:39.075982   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:39.075920   50049 retry.go:31] will retry after 1.408982456s: waiting for machine to come up
	I0906 19:44:40.486047   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:40.486505   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:40.486534   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:40.486447   50049 retry.go:31] will retry after 1.773912367s: waiting for machine to come up
	I0906 19:44:42.262021   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:42.262456   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:42.262489   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:42.262400   50049 retry.go:31] will retry after 1.86554428s: waiting for machine to come up
	I0906 19:44:44.129642   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:44.130110   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:44.130134   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:44.130081   50049 retry.go:31] will retry after 3.125751706s: waiting for machine to come up
	I0906 19:44:47.259253   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:47.259633   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:47.259658   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:47.259599   50049 retry.go:31] will retry after 3.99519796s: waiting for machine to come up
	I0906 19:44:51.258803   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:51.259212   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find current IP address of domain kubernetes-upgrade-959423 in network mk-kubernetes-upgrade-959423
	I0906 19:44:51.259237   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | I0906 19:44:51.259162   50049 retry.go:31] will retry after 4.735602488s: waiting for machine to come up
	I0906 19:44:55.997923   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:55.998360   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has current primary IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:55.998384   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Found IP for machine: 192.168.39.27
	I0906 19:44:55.998402   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Reserving static IP address...
	I0906 19:44:55.998736   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-959423", mac: "52:54:00:42:7b:80", ip: "192.168.39.27"} in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.072583   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Getting to WaitForSSH function...
	I0906 19:44:56.072612   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Reserved static IP address: 192.168.39.27
	I0906 19:44:56.072625   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Waiting for SSH to be available...
	I0906 19:44:56.075380   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.075735   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:minikube Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:56.075763   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.075955   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Using SSH client type: external
	I0906 19:44:56.075985   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa (-rw-------)
	I0906 19:44:56.076017   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 19:44:56.076032   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | About to run SSH command:
	I0906 19:44:56.076054   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | exit 0
	I0906 19:44:56.201091   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | SSH cmd err, output: <nil>: 
	I0906 19:44:56.201331   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) KVM machine creation complete!
	I0906 19:44:56.201616   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetConfigRaw
	I0906 19:44:56.202169   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:44:56.202372   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:44:56.202498   49987 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 19:44:56.202515   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetState
	I0906 19:44:56.203667   49987 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 19:44:56.203687   49987 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 19:44:56.203696   49987 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 19:44:56.203708   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:56.206021   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.206419   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:56.206447   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.206618   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:44:56.206781   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.206943   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.207073   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:44:56.207224   49987 main.go:141] libmachine: Using SSH client type: native
	I0906 19:44:56.207409   49987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0906 19:44:56.207421   49987 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 19:44:56.312251   49987 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:44:56.312278   49987 main.go:141] libmachine: Detecting the provisioner...
	I0906 19:44:56.312300   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:56.314934   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.315286   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:56.315322   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.315446   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:44:56.315647   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.315810   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.315984   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:44:56.316170   49987 main.go:141] libmachine: Using SSH client type: native
	I0906 19:44:56.316387   49987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0906 19:44:56.316402   49987 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 19:44:56.421599   49987 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 19:44:56.421698   49987 main.go:141] libmachine: found compatible host: buildroot
	I0906 19:44:56.421712   49987 main.go:141] libmachine: Provisioning with buildroot...
	I0906 19:44:56.421724   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetMachineName
	I0906 19:44:56.421971   49987 buildroot.go:166] provisioning hostname "kubernetes-upgrade-959423"
	I0906 19:44:56.421993   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetMachineName
	I0906 19:44:56.422170   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:56.424601   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.424908   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:56.424946   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.425099   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:44:56.425333   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.425497   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.425671   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:44:56.425835   49987 main.go:141] libmachine: Using SSH client type: native
	I0906 19:44:56.425992   49987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0906 19:44:56.426004   49987 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-959423 && echo "kubernetes-upgrade-959423" | sudo tee /etc/hostname
	I0906 19:44:56.546862   49987 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-959423
	
	I0906 19:44:56.546889   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:56.549654   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.550043   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:56.550065   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.550266   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:44:56.550450   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.550609   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.550714   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:44:56.550858   49987 main.go:141] libmachine: Using SSH client type: native
	I0906 19:44:56.551024   49987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0906 19:44:56.551039   49987 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-959423' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-959423/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-959423' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:44:56.667673   49987 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:44:56.667704   49987 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:44:56.667735   49987 buildroot.go:174] setting up certificates
	I0906 19:44:56.667745   49987 provision.go:84] configureAuth start
	I0906 19:44:56.667755   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetMachineName
	I0906 19:44:56.668009   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetIP
	I0906 19:44:56.671012   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.671333   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:56.671361   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.671571   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:56.673979   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.674174   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:56.674196   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.674374   49987 provision.go:143] copyHostCerts
	I0906 19:44:56.674426   49987 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:44:56.674451   49987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:44:56.674535   49987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:44:56.674650   49987 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:44:56.674662   49987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:44:56.674697   49987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:44:56.674767   49987 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:44:56.674776   49987 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:44:56.674810   49987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:44:56.674870   49987 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-959423 san=[127.0.0.1 192.168.39.27 kubernetes-upgrade-959423 localhost minikube]
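The provision step above builds a server certificate signed by the minikube CA and stamped with the listed SANs. A rough openssl equivalent of what provision.go:117 produces, for illustration only (minikube generates this in Go via crypto/x509; the file names and SAN list are the ones from the log, key size and validity are assumptions):

	# create a key and CSR with the org name used above, then sign it with the minikube CA,
	# attaching the same SAN list (IPs and DNS names) the log reports
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.kubernetes-upgrade-959423"
	openssl x509 -req -in server.csr -days 365 \
	  -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.27,DNS:kubernetes-upgrade-959423,DNS:localhost,DNS:minikube') \
	  -out machines/server.pem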
	I0906 19:44:56.828043   49987 provision.go:177] copyRemoteCerts
	I0906 19:44:56.828095   49987 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:44:56.828117   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:56.830619   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.830915   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:56.830956   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.831137   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:44:56.831373   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.831520   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:44:56.831656   49987 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa Username:docker}
	I0906 19:44:56.914814   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:44:56.939140   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0906 19:44:56.962405   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 19:44:56.987008   49987 provision.go:87] duration metric: took 319.251084ms to configureAuth
	I0906 19:44:56.987035   49987 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:44:56.987186   49987 config.go:182] Loaded profile config "kubernetes-upgrade-959423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 19:44:56.987263   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:56.989777   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.990089   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:56.990119   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:56.990287   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:44:56.990467   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.990661   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:56.990797   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:44:56.990940   49987 main.go:141] libmachine: Using SSH client type: native
	I0906 19:44:56.991097   49987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0906 19:44:56.991113   49987 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:44:57.213422   49987 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
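The option takes effect because /etc/sysconfig/crio.minikube is a plain key=value environment file that the guest's crio service can pull in before starting; a sketch of that wiring (the unit fragment below is an assumption for illustration, not copied from the ISO):

	# file written by the tee command above
	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	#
	# a crio.service drop-in would consume it roughly like:
	#   [Service]
	#   EnvironmentFile=-/etc/sysconfig/crio.minikube
	#   ExecStart=
	#   ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS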
	
	I0906 19:44:57.213484   49987 main.go:141] libmachine: Checking connection to Docker...
	I0906 19:44:57.213498   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetURL
	I0906 19:44:57.214674   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | Using libvirt version 6000000
	I0906 19:44:57.216552   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.216885   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:57.216915   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.217081   49987 main.go:141] libmachine: Docker is up and running!
	I0906 19:44:57.217095   49987 main.go:141] libmachine: Reticulating splines...
	I0906 19:44:57.217102   49987 client.go:171] duration metric: took 24.102394865s to LocalClient.Create
	I0906 19:44:57.217121   49987 start.go:167] duration metric: took 24.102466556s to libmachine.API.Create "kubernetes-upgrade-959423"
	I0906 19:44:57.217134   49987 start.go:293] postStartSetup for "kubernetes-upgrade-959423" (driver="kvm2")
	I0906 19:44:57.217149   49987 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:44:57.217170   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:44:57.217403   49987 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:44:57.217432   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:57.219254   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.219602   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:57.219639   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.219749   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:44:57.219902   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:57.220049   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:44:57.220163   49987 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa Username:docker}
	I0906 19:44:57.303003   49987 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:44:57.307400   49987 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:44:57.307426   49987 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:44:57.307501   49987 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:44:57.307607   49987 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:44:57.307720   49987 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:44:57.317692   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:44:57.342333   49987 start.go:296] duration metric: took 125.185384ms for postStartSetup
	I0906 19:44:57.342381   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetConfigRaw
	I0906 19:44:57.342984   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetIP
	I0906 19:44:57.345461   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.345814   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:57.345843   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.346012   49987 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/config.json ...
	I0906 19:44:57.346240   49987 start.go:128] duration metric: took 24.254412001s to createHost
	I0906 19:44:57.346263   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:57.348324   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.348672   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:57.348713   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.348778   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:44:57.348951   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:57.349093   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:57.349220   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:44:57.349381   49987 main.go:141] libmachine: Using SSH client type: native
	I0906 19:44:57.349575   49987 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0906 19:44:57.349589   49987 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:44:57.457707   49987 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725651897.431542737
	
	I0906 19:44:57.457737   49987 fix.go:216] guest clock: 1725651897.431542737
	I0906 19:44:57.457747   49987 fix.go:229] Guest: 2024-09-06 19:44:57.431542737 +0000 UTC Remote: 2024-09-06 19:44:57.346253435 +0000 UTC m=+24.376908391 (delta=85.289302ms)
	I0906 19:44:57.457769   49987 fix.go:200] guest clock delta is within tolerance: 85.289302ms
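The delta is computed by running date +%s.%N on the guest over SSH and comparing it to a host-side timestamp taken around the call; a standalone sketch of an equivalent check (key path and IP are the ones from the log):

	host_before=$(date +%s.%N)
	guest=$(ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa \
	  docker@192.168.39.27 'date +%s.%N')
	host_after=$(date +%s.%N)
	# use the midpoint of the SSH round trip as the host reference time
	delta=$(echo "$guest - ($host_before + $host_after) / 2" | bc -l)
	echo "guest clock delta: ${delta}s"   # the fix.go check above passes because this stays within tolerance (~85ms here)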
	I0906 19:44:57.457783   49987 start.go:83] releasing machines lock for "kubernetes-upgrade-959423", held for 24.366028473s
	I0906 19:44:57.457816   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:44:57.458076   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetIP
	I0906 19:44:57.460843   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.461194   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:57.461227   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.461336   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:44:57.461845   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:44:57.462029   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:44:57.462127   49987 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:44:57.462165   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:57.462265   49987 ssh_runner.go:195] Run: cat /version.json
	I0906 19:44:57.462289   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:44:57.464791   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.465193   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:57.465223   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.465389   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.465503   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:44:57.465684   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:57.465771   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:57.465805   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:57.465822   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:44:57.465936   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:44:57.466006   49987 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa Username:docker}
	I0906 19:44:57.466068   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:44:57.466185   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:44:57.466352   49987 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa Username:docker}
	I0906 19:44:57.550001   49987 ssh_runner.go:195] Run: systemctl --version
	I0906 19:44:57.574315   49987 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:44:57.736134   49987 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 19:44:57.742194   49987 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:44:57.742252   49987 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:44:57.758286   49987 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 19:44:57.758316   49987 start.go:495] detecting cgroup driver to use...
	I0906 19:44:57.758378   49987 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:44:57.774527   49987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:44:57.788101   49987 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:44:57.788168   49987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:44:57.801778   49987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:44:57.815265   49987 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:44:57.934650   49987 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:44:58.090120   49987 docker.go:233] disabling docker service ...
	I0906 19:44:58.090181   49987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:44:58.113114   49987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:44:58.129117   49987 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:44:58.258685   49987 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:44:58.391727   49987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:44:58.409388   49987 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:44:58.430663   49987 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 19:44:58.430722   49987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:44:58.441880   49987 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:44:58.441960   49987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:44:58.454361   49987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:44:58.465624   49987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
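After the three sed edits above, the CRI-O drop-in should carry the pause image, cgroup manager and conmon cgroup chosen for this run; a quick way to confirm on the guest (expected values are the ones written by the commands above):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"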
	I0906 19:44:58.477016   49987 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:44:58.488197   49987 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:44:58.497985   49987 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 19:44:58.498047   49987 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 19:44:58.513350   49987 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
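The earlier sysctl probe failed only because /proc/sys/net/bridge/ does not exist until br_netfilter is loaded; the modprobe plus the ip_forward write above give the kernel state Kubernetes networking needs for this boot. A sketch of a persistent variant of the same settings (the sysctl.d file name is an arbitrary choice, not something this run creates):

	sudo modprobe br_netfilter
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
	  | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
	sudo sysctl --system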
	I0906 19:44:58.525710   49987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:44:58.648815   49987 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 19:44:58.759630   49987 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:44:58.759701   49987 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:44:58.764831   49987 start.go:563] Will wait 60s for crictl version
	I0906 19:44:58.764909   49987 ssh_runner.go:195] Run: which crictl
	I0906 19:44:58.768926   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:44:58.813474   49987 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 19:44:58.813571   49987 ssh_runner.go:195] Run: crio --version
	I0906 19:44:58.848786   49987 ssh_runner.go:195] Run: crio --version
	I0906 19:44:58.879132   49987 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0906 19:44:58.880458   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetIP
	I0906 19:44:58.883179   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:58.883541   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:44:48 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:44:58.883564   49987 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:44:58.883872   49987 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 19:44:58.888746   49987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 19:44:58.902357   49987 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-959423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-959423 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.27 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:44:58.902460   49987 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 19:44:58.902510   49987 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:44:58.943941   49987 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 19:44:58.944007   49987 ssh_runner.go:195] Run: which lz4
	I0906 19:44:58.948130   49987 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 19:44:58.952603   49987 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 19:44:58.952626   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0906 19:45:00.632182   49987 crio.go:462] duration metric: took 1.684088034s to copy over tarball
	I0906 19:45:00.632255   49987 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 19:45:03.219132   49987 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.586847011s)
	I0906 19:45:03.219164   49987 crio.go:469] duration metric: took 2.586949575s to extract the tarball
	I0906 19:45:03.219173   49987 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 19:45:03.262498   49987 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:45:03.307613   49987 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 19:45:03.307638   49987 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 19:45:03.307716   49987 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:45:03.307844   49987 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:45:03.307760   49987 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:45:03.307738   49987 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:45:03.307764   49987 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0906 19:45:03.307784   49987 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0906 19:45:03.307790   49987 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:45:03.307790   49987 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0906 19:45:03.308962   49987 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:45:03.308974   49987 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:45:03.308973   49987 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:45:03.309044   49987 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:45:03.309071   49987 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 19:45:03.309131   49987 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0906 19:45:03.309372   49987 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:45:03.309625   49987 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0906 19:45:03.466196   49987 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0906 19:45:03.491263   49987 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0906 19:45:03.498574   49987 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:45:03.503597   49987 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:45:03.529307   49987 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 19:45:03.530192   49987 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:45:03.531476   49987 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0906 19:45:03.531514   49987 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0906 19:45:03.531551   49987 ssh_runner.go:195] Run: which crictl
	I0906 19:45:03.543356   49987 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:45:03.625270   49987 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0906 19:45:03.625311   49987 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0906 19:45:03.625318   49987 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0906 19:45:03.625356   49987 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:45:03.625370   49987 ssh_runner.go:195] Run: which crictl
	I0906 19:45:03.625421   49987 ssh_runner.go:195] Run: which crictl
	I0906 19:45:03.655446   49987 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0906 19:45:03.655491   49987 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:45:03.655530   49987 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0906 19:45:03.655544   49987 ssh_runner.go:195] Run: which crictl
	I0906 19:45:03.655564   49987 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 19:45:03.655603   49987 ssh_runner.go:195] Run: which crictl
	I0906 19:45:03.667835   49987 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0906 19:45:03.667880   49987 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:45:03.667895   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 19:45:03.667925   49987 ssh_runner.go:195] Run: which crictl
	I0906 19:45:03.672007   49987 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0906 19:45:03.672046   49987 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:45:03.672061   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:45:03.672091   49987 ssh_runner.go:195] Run: which crictl
	I0906 19:45:03.672096   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 19:45:03.672145   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:45:03.672172   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 19:45:03.675097   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:45:03.778422   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 19:45:03.808585   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:45:03.808616   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:45:03.808654   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 19:45:03.808725   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:45:03.808736   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 19:45:03.808802   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:45:03.839940   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 19:45:03.976912   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:45:03.977001   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 19:45:03.981928   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:45:03.981990   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:45:03.982021   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 19:45:03.981929   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:45:03.982068   49987 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0906 19:45:04.079826   49987 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:45:04.088317   49987 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0906 19:45:04.088385   49987 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0906 19:45:04.126848   49987 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:45:04.126883   49987 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0906 19:45:04.126928   49987 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0906 19:45:04.126944   49987 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0906 19:45:04.262690   49987 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0906 19:45:04.262759   49987 cache_images.go:92] duration metric: took 955.104869ms to LoadCachedImages
	W0906 19:45:04.262832   49987 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0906 19:45:04.262847   49987 kubeadm.go:934] updating node { 192.168.39.27 8443 v1.20.0 crio true true} ...
	I0906 19:45:04.262957   49987 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-959423 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-959423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:45:04.263013   49987 ssh_runner.go:195] Run: crio config
	I0906 19:45:04.320449   49987 cni.go:84] Creating CNI manager for ""
	I0906 19:45:04.320468   49987 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:45:04.320479   49987 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:45:04.320496   49987 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.27 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-959423 NodeName:kubernetes-upgrade-959423 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 19:45:04.320620   49987 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.27
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-959423"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:45:04.320675   49987 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0906 19:45:04.330820   49987 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:45:04.330894   49987 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 19:45:04.340323   49987 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0906 19:45:04.358779   49987 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:45:04.376522   49987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0906 19:45:04.394750   49987 ssh_runner.go:195] Run: grep 192.168.39.27	control-plane.minikube.internal$ /etc/hosts
	I0906 19:45:04.398830   49987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 19:45:04.411234   49987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:45:04.526158   49987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:45:04.553290   49987 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423 for IP: 192.168.39.27
	I0906 19:45:04.553324   49987 certs.go:194] generating shared ca certs ...
	I0906 19:45:04.553345   49987 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:45:04.553501   49987 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:45:04.553564   49987 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:45:04.553579   49987 certs.go:256] generating profile certs ...
	I0906 19:45:04.553642   49987 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/client.key
	I0906 19:45:04.553662   49987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/client.crt with IP's: []
	I0906 19:45:04.716221   49987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/client.crt ...
	I0906 19:45:04.716248   49987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/client.crt: {Name:mkf0d5f38a29f647d3e42a765aaaebcc7757a49b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:45:04.716428   49987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/client.key ...
	I0906 19:45:04.716447   49987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/client.key: {Name:mka6ea28c7467e90f509b525f010dde2aa7a8c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:45:04.716564   49987 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.key.04e217dd
	I0906 19:45:04.716591   49987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.crt.04e217dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.27]
	I0906 19:45:04.855179   49987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.crt.04e217dd ...
	I0906 19:45:04.855210   49987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.crt.04e217dd: {Name:mk504a1fa88ef1f7eeaa89608b63921ea5834b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:45:04.855379   49987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.key.04e217dd ...
	I0906 19:45:04.855401   49987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.key.04e217dd: {Name:mk8000bede31d04658c7ca0b2a77049269110ccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:45:04.855498   49987 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.crt.04e217dd -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.crt
	I0906 19:45:04.855599   49987 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.key.04e217dd -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.key
	I0906 19:45:04.855678   49987 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/proxy-client.key
	I0906 19:45:04.855698   49987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/proxy-client.crt with IP's: []
	I0906 19:45:04.921629   49987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/proxy-client.crt ...
	I0906 19:45:04.921658   49987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/proxy-client.crt: {Name:mkd28e486a8e4cfbfefdd5ae6fcc86225355c94b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:45:04.921816   49987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/proxy-client.key ...
	I0906 19:45:04.921835   49987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/proxy-client.key: {Name:mk54643497f5c98c79fa7eb3a8f1187fd0ef1522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:45:04.922040   49987 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:45:04.922088   49987 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:45:04.922110   49987 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:45:04.922141   49987 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:45:04.922169   49987 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:45:04.922213   49987 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:45:04.922270   49987 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:45:04.922935   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:45:04.950327   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:45:04.977697   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:45:05.005007   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:45:05.031790   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0906 19:45:05.061832   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 19:45:05.091729   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:45:05.116472   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kubernetes-upgrade-959423/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 19:45:05.139420   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:45:05.166860   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:45:05.192817   49987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:45:05.217895   49987 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:45:05.234966   49987 ssh_runner.go:195] Run: openssl version
	I0906 19:45:05.241061   49987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:45:05.253275   49987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:45:05.258220   49987 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:45:05.258275   49987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:45:05.264295   49987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 19:45:05.275443   49987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:45:05.286900   49987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:45:05.291568   49987 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:45:05.291638   49987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:45:05.297704   49987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:45:05.310727   49987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:45:05.323550   49987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:45:05.328182   49987 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:45:05.328238   49987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:45:05.333949   49987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 19:45:05.344977   49987 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:45:05.349306   49987 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 19:45:05.349367   49987 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-959423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-959423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.27 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:45:05.349454   49987 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:45:05.349494   49987 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:45:05.391333   49987 cri.go:89] found id: ""
	I0906 19:45:05.391421   49987 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 19:45:05.404522   49987 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 19:45:05.414205   49987 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 19:45:05.424574   49987 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 19:45:05.424597   49987 kubeadm.go:157] found existing configuration files:
	
	I0906 19:45:05.424650   49987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 19:45:05.433852   49987 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 19:45:05.433909   49987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 19:45:05.443711   49987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 19:45:05.460040   49987 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 19:45:05.460108   49987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 19:45:05.472453   49987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 19:45:05.489964   49987 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 19:45:05.490021   49987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 19:45:05.503844   49987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 19:45:05.514195   49987 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 19:45:05.514261   49987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 19:45:05.525423   49987 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 19:45:05.654706   49987 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 19:45:05.654770   49987 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 19:45:05.794227   49987 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 19:45:05.794401   49987 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 19:45:05.794537   49987 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 19:45:05.994040   49987 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 19:45:06.052633   49987 out.go:235]   - Generating certificates and keys ...
	I0906 19:45:06.052746   49987 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 19:45:06.052831   49987 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 19:45:06.340492   49987 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 19:45:06.386164   49987 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 19:45:06.496647   49987 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 19:45:06.613609   49987 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 19:45:06.721475   49987 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 19:45:06.721826   49987 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-959423 localhost] and IPs [192.168.39.27 127.0.0.1 ::1]
	I0906 19:45:06.853848   49987 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 19:45:06.854090   49987 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-959423 localhost] and IPs [192.168.39.27 127.0.0.1 ::1]
	I0906 19:45:07.060668   49987 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 19:45:07.140217   49987 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 19:45:07.262183   49987 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 19:45:07.262351   49987 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 19:45:07.324526   49987 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 19:45:07.910042   49987 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 19:45:08.086761   49987 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 19:45:08.151690   49987 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 19:45:08.170528   49987 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 19:45:08.173479   49987 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 19:45:08.173603   49987 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 19:45:08.296686   49987 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 19:45:08.298697   49987 out.go:235]   - Booting up control plane ...
	I0906 19:45:08.298836   49987 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 19:45:08.303291   49987 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 19:45:08.304997   49987 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 19:45:08.305887   49987 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 19:45:08.310228   49987 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 19:45:48.303750   49987 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 19:45:48.303875   49987 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:45:48.304144   49987 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:45:53.304542   49987 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:45:53.304812   49987 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:46:03.305053   49987 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:46:03.305346   49987 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:46:23.303516   49987 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:46:23.303800   49987 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:47:03.305698   49987 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:47:03.305907   49987 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:47:03.305917   49987 kubeadm.go:310] 
	I0906 19:47:03.305958   49987 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 19:47:03.306061   49987 kubeadm.go:310] 		timed out waiting for the condition
	I0906 19:47:03.306084   49987 kubeadm.go:310] 
	I0906 19:47:03.306143   49987 kubeadm.go:310] 	This error is likely caused by:
	I0906 19:47:03.306201   49987 kubeadm.go:310] 		- The kubelet is not running
	I0906 19:47:03.306353   49987 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 19:47:03.306367   49987 kubeadm.go:310] 
	I0906 19:47:03.306508   49987 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 19:47:03.306569   49987 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 19:47:03.306635   49987 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 19:47:03.306654   49987 kubeadm.go:310] 
	I0906 19:47:03.306808   49987 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 19:47:03.306904   49987 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 19:47:03.306935   49987 kubeadm.go:310] 
	I0906 19:47:03.307100   49987 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 19:47:03.307233   49987 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 19:47:03.307303   49987 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 19:47:03.307392   49987 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 19:47:03.307402   49987 kubeadm.go:310] 
	I0906 19:47:03.307605   49987 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 19:47:03.307753   49987 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 19:47:03.307866   49987 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0906 19:47:03.308017   49987 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-959423 localhost] and IPs [192.168.39.27 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-959423 localhost] and IPs [192.168.39.27 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0906 19:47:03.308067   49987 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 19:47:04.335613   49987 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.027519487s)
	I0906 19:47:04.335709   49987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:47:04.350113   49987 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 19:47:04.360218   49987 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 19:47:04.360243   49987 kubeadm.go:157] found existing configuration files:
	
	I0906 19:47:04.360308   49987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 19:47:04.369499   49987 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 19:47:04.369564   49987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 19:47:04.379189   49987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 19:47:04.388451   49987 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 19:47:04.388512   49987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 19:47:04.398611   49987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 19:47:04.410628   49987 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 19:47:04.410692   49987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 19:47:04.423679   49987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 19:47:04.435934   49987 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 19:47:04.435997   49987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 19:47:04.448812   49987 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 19:47:04.552705   49987 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 19:47:04.552796   49987 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 19:47:04.724594   49987 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 19:47:04.724729   49987 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 19:47:04.724845   49987 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 19:47:04.987212   49987 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 19:47:04.989951   49987 out.go:235]   - Generating certificates and keys ...
	I0906 19:47:04.990069   49987 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 19:47:04.990136   49987 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 19:47:04.990216   49987 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 19:47:04.990286   49987 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 19:47:04.990356   49987 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 19:47:04.990571   49987 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 19:47:04.990753   49987 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 19:47:04.991215   49987 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 19:47:04.991783   49987 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 19:47:04.992131   49987 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 19:47:04.992277   49987 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 19:47:04.992352   49987 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 19:47:05.103602   49987 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 19:47:05.343185   49987 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 19:47:05.523352   49987 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 19:47:05.623412   49987 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 19:47:05.660364   49987 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 19:47:05.662411   49987 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 19:47:05.662630   49987 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 19:47:05.941048   49987 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 19:47:05.943556   49987 out.go:235]   - Booting up control plane ...
	I0906 19:47:05.943669   49987 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 19:47:05.952339   49987 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 19:47:05.954009   49987 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 19:47:05.954977   49987 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 19:47:05.957934   49987 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 19:47:45.961015   49987 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 19:47:45.961265   49987 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:47:45.961541   49987 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:47:50.962321   49987 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:47:50.962566   49987 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:48:00.963108   49987 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:48:00.963358   49987 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:48:20.962143   49987 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:48:20.962344   49987 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:49:00.961915   49987 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:49:00.962182   49987 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:49:00.962195   49987 kubeadm.go:310] 
	I0906 19:49:00.962241   49987 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 19:49:00.962276   49987 kubeadm.go:310] 		timed out waiting for the condition
	I0906 19:49:00.962283   49987 kubeadm.go:310] 
	I0906 19:49:00.962338   49987 kubeadm.go:310] 	This error is likely caused by:
	I0906 19:49:00.962376   49987 kubeadm.go:310] 		- The kubelet is not running
	I0906 19:49:00.962497   49987 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 19:49:00.962516   49987 kubeadm.go:310] 
	I0906 19:49:00.962635   49987 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 19:49:00.962668   49987 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 19:49:00.962698   49987 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 19:49:00.962706   49987 kubeadm.go:310] 
	I0906 19:49:00.962871   49987 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 19:49:00.963009   49987 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 19:49:00.963021   49987 kubeadm.go:310] 
	I0906 19:49:00.963188   49987 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 19:49:00.963323   49987 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 19:49:00.963406   49987 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 19:49:00.963495   49987 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 19:49:00.963511   49987 kubeadm.go:310] 
	I0906 19:49:00.964542   49987 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 19:49:00.964649   49987 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 19:49:00.964730   49987 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0906 19:49:00.964803   49987 kubeadm.go:394] duration metric: took 3m55.615444971s to StartCluster
	I0906 19:49:00.964852   49987 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 19:49:00.964941   49987 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 19:49:01.024021   49987 cri.go:89] found id: ""
	I0906 19:49:01.024055   49987 logs.go:276] 0 containers: []
	W0906 19:49:01.024067   49987 logs.go:278] No container was found matching "kube-apiserver"
	I0906 19:49:01.024075   49987 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 19:49:01.024146   49987 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 19:49:01.066819   49987 cri.go:89] found id: ""
	I0906 19:49:01.066850   49987 logs.go:276] 0 containers: []
	W0906 19:49:01.066860   49987 logs.go:278] No container was found matching "etcd"
	I0906 19:49:01.066868   49987 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 19:49:01.066929   49987 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 19:49:01.112603   49987 cri.go:89] found id: ""
	I0906 19:49:01.112629   49987 logs.go:276] 0 containers: []
	W0906 19:49:01.112640   49987 logs.go:278] No container was found matching "coredns"
	I0906 19:49:01.112648   49987 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 19:49:01.112717   49987 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 19:49:01.151560   49987 cri.go:89] found id: ""
	I0906 19:49:01.151593   49987 logs.go:276] 0 containers: []
	W0906 19:49:01.151604   49987 logs.go:278] No container was found matching "kube-scheduler"
	I0906 19:49:01.151612   49987 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 19:49:01.151672   49987 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 19:49:01.201191   49987 cri.go:89] found id: ""
	I0906 19:49:01.201222   49987 logs.go:276] 0 containers: []
	W0906 19:49:01.201234   49987 logs.go:278] No container was found matching "kube-proxy"
	I0906 19:49:01.201242   49987 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 19:49:01.201317   49987 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 19:49:01.238929   49987 cri.go:89] found id: ""
	I0906 19:49:01.238954   49987 logs.go:276] 0 containers: []
	W0906 19:49:01.238965   49987 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 19:49:01.238978   49987 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 19:49:01.239032   49987 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 19:49:01.277808   49987 cri.go:89] found id: ""
	I0906 19:49:01.277836   49987 logs.go:276] 0 containers: []
	W0906 19:49:01.277845   49987 logs.go:278] No container was found matching "kindnet"
	I0906 19:49:01.277853   49987 logs.go:123] Gathering logs for dmesg ...
	I0906 19:49:01.277867   49987 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 19:49:01.294975   49987 logs.go:123] Gathering logs for describe nodes ...
	I0906 19:49:01.295007   49987 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 19:49:01.431737   49987 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 19:49:01.431767   49987 logs.go:123] Gathering logs for CRI-O ...
	I0906 19:49:01.431785   49987 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 19:49:01.541273   49987 logs.go:123] Gathering logs for container status ...
	I0906 19:49:01.541307   49987 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 19:49:01.589741   49987 logs.go:123] Gathering logs for kubelet ...
	I0906 19:49:01.589783   49987 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0906 19:49:01.643392   49987 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 19:49:01.643492   49987 out.go:270] * 
	W0906 19:49:01.643557   49987 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W0906 19:49:01.643586   49987 out.go:270] * 
	* 
	W0906 19:49:01.644439   49987 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 19:49:01.706816   49987 out.go:201] 
	W0906 19:49:01.808519   49987 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 19:49:01.808626   49987 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 19:49:01.808705   49987 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 19:49:01.853812   49987 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-959423 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
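Note: the kubelet never became healthy during the v1.20.0 bootstrap above, and minikube's own "Suggestion" line in that log points at the kubelet cgroup driver. A possible manual retry of this failing start, reusing the profile name from this run and the flag quoted verbatim in that suggestion, might look like the sketch below; whether the flag actually resolves the timeout on this kernel/cgroup setup is an assumption, not something this run demonstrates.

	# optional clean recreate, mirroring the delete/start pattern minikube suggests elsewhere in this report
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-959423
	out/minikube-linux-amd64 start -p kubernetes-upgrade-959423 --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd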
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-959423
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-959423: (6.458468842s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-959423 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-959423 status --format={{.Host}}: exit status 7 (64.062243ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-959423 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-959423 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.717633832s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-959423 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-959423 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-959423 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (75.794124ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-959423] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-959423
	    minikube start -p kubernetes-upgrade-959423 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9594232 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-959423 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
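Note: the exit-106 output above lists the three recovery paths minikube offers when a downgrade is requested. Before choosing one, a quick way to confirm what version the existing cluster is actually serving, assuming the kubeconfig context matches the profile name as it does in this run, is the same read-only check the test itself performs:

	kubectl --context kubernetes-upgrade-959423 version --output=json

This only reports the client and server versions and does not modify the cluster, so it is safe to run before either deleting the profile or restarting it at v1.31.0.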
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-959423 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-959423 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m26.141842196s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-06 19:51:44.467314387 +0000 UTC m=+4950.957067916
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-959423 -n kubernetes-upgrade-959423
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-959423 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-959423 logs -n 25: (1.734986587s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-098096             | minikube                  | jenkins | v1.26.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:48 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-944227 sudo           | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-944227                | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:47 UTC |
	| start   | -p NoKubernetes-944227                | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:48 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-952957             | running-upgrade-952957    | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:47 UTC |
	| start   | -p force-systemd-flag-689823          | force-systemd-flag-689823 | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:48 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-944227 sudo           | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-944227                | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	| start   | -p cert-expiration-097103             | cert-expiration-097103    | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:49 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-098096 stop           | minikube                  | jenkins | v1.26.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	| start   | -p stopped-upgrade-098096             | stopped-upgrade-098096    | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:49 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-689823 ssh cat     | force-systemd-flag-689823 | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-689823          | force-systemd-flag-689823 | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	| start   | -p cert-options-417185                | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:49 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	| start   | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:50 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-098096             | stopped-upgrade-098096    | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	| start   | -p pause-306799 --memory=2048         | pause-306799              | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:50 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-417185 ssh               | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-417185 -- sudo        | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-417185                | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:50 UTC |
	| start   | -p auto-603826 --memory=3072          | auto-603826               | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC | 06 Sep 24 19:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-306799                       | pause-306799              | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 19:50:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:50:57.650137   57714 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:50:57.650254   57714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:50:57.650265   57714 out.go:358] Setting ErrFile to fd 2...
	I0906 19:50:57.650271   57714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:50:57.650463   57714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:50:57.650993   57714 out.go:352] Setting JSON to false
	I0906 19:50:57.651936   57714 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5607,"bootTime":1725646651,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:50:57.651997   57714 start.go:139] virtualization: kvm guest
	I0906 19:50:57.654217   57714 out.go:177] * [pause-306799] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:50:57.655384   57714 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:50:57.655429   57714 notify.go:220] Checking for updates...
	I0906 19:50:57.657955   57714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:50:57.659318   57714 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:50:57.660548   57714 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:50:57.661694   57714 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:50:57.662774   57714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:50:57.664218   57714 config.go:182] Loaded profile config "pause-306799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:50:57.664679   57714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:50:57.664727   57714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:50:57.682235   57714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37073
	I0906 19:50:57.682714   57714 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:50:57.683301   57714 main.go:141] libmachine: Using API Version  1
	I0906 19:50:57.683346   57714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:50:57.683769   57714 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:50:57.683971   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:50:57.684280   57714 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:50:57.684734   57714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:50:57.684782   57714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:50:57.699914   57714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0906 19:50:57.700369   57714 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:50:57.700925   57714 main.go:141] libmachine: Using API Version  1
	I0906 19:50:57.700954   57714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:50:57.701284   57714 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:50:57.701523   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:50:57.758615   57714 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 19:50:57.791637   57714 start.go:297] selected driver: kvm2
	I0906 19:50:57.791664   57714 start.go:901] validating driver "kvm2" against &{Name:pause-306799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-306799 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false p
ortainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:50:57.791831   57714 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:50:57.792194   57714 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:50:57.792288   57714 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:50:57.808705   57714 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:50:57.809475   57714 cni.go:84] Creating CNI manager for ""
	I0906 19:50:57.809492   57714 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:50:57.809566   57714 start.go:340] cluster config:
	{Name:pause-306799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-306799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false stor
age-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:50:57.809713   57714 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:50:57.856274   57714 out.go:177] * Starting "pause-306799" primary control-plane node in "pause-306799" cluster
	I0906 19:50:53.418147   57350 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:50:53.418196   57350 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:50:53.418242   57350 buildroot.go:174] setting up certificates
	I0906 19:50:53.418259   57350 provision.go:84] configureAuth start
	I0906 19:50:53.418285   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetMachineName
	I0906 19:50:53.418538   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetIP
	I0906 19:50:53.421117   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:50:53.421465   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:49:52 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:50:53.421490   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:50:53.421633   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:50:53.424017   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:50:53.424380   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:49:52 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:50:53.424411   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:50:53.424557   57350 provision.go:143] copyHostCerts
	I0906 19:50:53.424616   57350 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:50:53.424633   57350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:50:53.424689   57350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:50:53.424784   57350 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:50:53.424791   57350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:50:53.424814   57350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:50:53.424904   57350 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:50:53.424913   57350 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:50:53.424939   57350 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:50:53.425012   57350 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-959423 san=[127.0.0.1 192.168.39.27 kubernetes-upgrade-959423 localhost minikube]
	I0906 19:50:53.851872   57350 provision.go:177] copyRemoteCerts
	I0906 19:50:53.851929   57350 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:50:53.851954   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:50:53.854706   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:50:53.855156   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:49:52 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:50:53.855191   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:50:53.855399   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:50:53.855620   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:50:53.855793   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:50:53.855950   57350 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa Username:docker}
	I0906 19:50:53.944631   57350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:50:53.971356   57350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0906 19:50:53.998002   57350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 19:50:54.029390   57350 provision.go:87] duration metric: took 611.117017ms to configureAuth
	I0906 19:50:54.029418   57350 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:50:54.029635   57350 config.go:182] Loaded profile config "kubernetes-upgrade-959423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:50:54.029718   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:50:54.032503   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:50:54.032945   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:49:52 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:50:54.032978   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:50:54.033155   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:50:54.033340   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:50:54.033539   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:50:54.033677   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:50:54.033830   57350 main.go:141] libmachine: Using SSH client type: native
	I0906 19:50:54.034046   57350 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0906 19:50:54.034069   57350 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:50:55.969485   57042 crio.go:462] duration metric: took 1.404962813s to copy over tarball
	I0906 19:50:55.969559   57042 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 19:50:58.275036   57042 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.305447517s)
	I0906 19:50:58.275065   57042 crio.go:469] duration metric: took 2.305552451s to extract the tarball
	I0906 19:50:58.275072   57042 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 19:50:58.313139   57042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:50:58.355168   57042 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:50:58.355194   57042 cache_images.go:84] Images are preloaded, skipping loading
	I0906 19:50:58.355202   57042 kubeadm.go:934] updating node { 192.168.72.144 8443 v1.31.0 crio true true} ...
	I0906 19:50:58.355311   57042 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-603826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:auto-603826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:50:58.355380   57042 ssh_runner.go:195] Run: crio config
	I0906 19:50:58.404093   57042 cni.go:84] Creating CNI manager for ""
	I0906 19:50:58.404125   57042 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:50:58.404142   57042 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:50:58.404174   57042 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.144 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-603826 NodeName:auto-603826 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:50:58.404363   57042 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-603826"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:50:58.404433   57042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 19:50:58.415189   57042 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:50:58.415263   57042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 19:50:58.425500   57042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0906 19:50:58.443734   57042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:50:58.460273   57042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
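	The 2155-byte kubeadm.yaml.new copied here is the configuration block printed above, which minikube renders from Go templates before shipping it to the node. The snippet below is a simplified, stand-alone illustration of that templating step, not minikube's actual template; only the networking values are reproduced from this log.

```go
// Simplified illustration (not minikube's real template) of rendering the
// ClusterConfiguration networking stanza from Go values with text/template.
package main

import (
	"os"
	"text/template"
)

type networking struct {
	DNSDomain     string
	PodSubnet     string
	ServiceSubnet string
}

const stanza = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("networking").Parse(stanza))
	// Values taken from the rendered config above.
	if err := t.Execute(os.Stdout, networking{
		DNSDomain:     "cluster.local",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}
```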
	I0906 19:50:58.478151   57042 ssh_runner.go:195] Run: grep 192.168.72.144	control-plane.minikube.internal$ /etc/hosts
	I0906 19:50:58.482018   57042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 19:50:58.494558   57042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:50:58.619705   57042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:50:58.638096   57042 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826 for IP: 192.168.72.144
	I0906 19:50:58.638128   57042 certs.go:194] generating shared ca certs ...
	I0906 19:50:58.638148   57042 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:50:58.638289   57042 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:50:58.638331   57042 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:50:58.638340   57042 certs.go:256] generating profile certs ...
	I0906 19:50:58.638390   57042 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.key
	I0906 19:50:58.638403   57042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt with IP's: []
	I0906 19:50:58.807213   57042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt ...
	I0906 19:50:58.807243   57042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: {Name:mk9f703d4262bc78d510dbb287c47da70c4ba33c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:50:58.807428   57042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.key ...
	I0906 19:50:58.807444   57042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.key: {Name:mk2f4579a2b630f84449f41cd06a7d6938d931b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:50:58.807550   57042 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.key.a1fbd2a5
	I0906 19:50:58.807566   57042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.crt.a1fbd2a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.144]
	I0906 19:50:59.071288   57042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.crt.a1fbd2a5 ...
	I0906 19:50:59.071317   57042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.crt.a1fbd2a5: {Name:mk64354cf90d0e228d6b40bbdbbbae0564822315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:50:59.071494   57042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.key.a1fbd2a5 ...
	I0906 19:50:59.071517   57042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.key.a1fbd2a5: {Name:mkd78e833ef38bc3316c704236910f5939baf920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:50:59.071616   57042 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.crt.a1fbd2a5 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.crt
	I0906 19:50:59.071717   57042 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.key.a1fbd2a5 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.key
	I0906 19:50:59.071775   57042 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/proxy-client.key
	I0906 19:50:59.071790   57042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/proxy-client.crt with IP's: []
	I0906 19:50:59.181263   57042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/proxy-client.crt ...
	I0906 19:50:59.181294   57042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/proxy-client.crt: {Name:mk327cc2205fbecfd201ff3a08d262ef29f6651e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:50:59.181454   57042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/proxy-client.key ...
	I0906 19:50:59.181464   57042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/proxy-client.key: {Name:mk644bd840b43feca08964f8fbe5ce8125cc78ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:50:59.181639   57042 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:50:59.181676   57042 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:50:59.181685   57042 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:50:59.181707   57042 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:50:59.181730   57042 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:50:59.181755   57042 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:50:59.181791   57042 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:50:59.182370   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:50:59.210758   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:50:59.237852   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:50:59.263407   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:50:59.291582   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0906 19:50:59.319712   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 19:50:59.360828   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:50:59.426809   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 19:50:59.454097   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:50:59.480104   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:50:59.504943   57042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:50:59.531860   57042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:50:59.548380   57042 ssh_runner.go:195] Run: openssl version
	I0906 19:50:59.554429   57042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:50:59.566134   57042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:50:59.572471   57042 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:50:59.572538   57042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:50:59.580801   57042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:50:59.593377   57042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:50:59.604446   57042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:50:59.608904   57042 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:50:59.608955   57042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:50:59.614786   57042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 19:50:59.625219   57042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:50:59.637381   57042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:50:59.642184   57042 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:50:59.642254   57042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:50:59.648392   57042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
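	The three cert blocks above follow the OpenSSL trust-store convention: "openssl x509 -hash -noout" prints the certificate's subject hash (3ec20f2e, b5213941 and 51391683 in this run), and the certificate is then linked as /etc/ssl/certs/<hash>.0 so OpenSSL can locate it by hash. A small Go sketch of that step, shelling out to openssl the way the runner does, is below; the certificate path is a placeholder and writing into /etc/ssl/certs requires root.

```go
// Sketch of the hash-link step logged above: compute the subject hash with
// openssl and link the cert into /etc/ssl/certs/<hash>.0. Path is a placeholder.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", cert, "as", link)
}
```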
	I0906 19:50:59.659345   57042 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:50:59.664257   57042 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 19:50:59.664316   57042 kubeadm.go:392] StartCluster: {Name:auto-603826 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-603826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.144 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:50:59.664410   57042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:50:59.664481   57042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:50:59.706024   57042 cri.go:89] found id: ""
	I0906 19:50:59.706103   57042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 19:50:59.719595   57042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 19:50:59.730347   57042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 19:50:59.739757   57042 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 19:50:59.739775   57042 kubeadm.go:157] found existing configuration files:
	
	I0906 19:50:59.739814   57042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 19:50:59.748524   57042 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 19:50:59.748572   57042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 19:50:59.758039   57042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 19:50:59.766949   57042 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 19:50:59.767028   57042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 19:50:59.776850   57042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 19:50:59.786079   57042 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 19:50:59.786139   57042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 19:50:59.795347   57042 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 19:50:59.803787   57042 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 19:50:59.803841   57042 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 19:50:59.814143   57042 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
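	The kubeadm init command above carries a long --ignore-preflight-errors list. As a sketch only (not the bootstrapper's code), the flag value could be assembled from a slice before the command is sent over SSH:

```go
// Sketch: assemble the --ignore-preflight-errors value seen in the log from a
// slice of check names. The list is copied verbatim from the command above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	ignores := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := fmt.Sprintf(
		"sudo env PATH=\"/var/lib/minikube/binaries/v1.31.0:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s",
		strings.Join(ignores, ","),
	)
	fmt.Println(cmd)
}
```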
	I0906 19:50:59.874605   57042 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 19:50:59.874671   57042 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 19:50:59.994483   57042 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 19:50:59.994618   57042 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 19:50:59.994748   57042 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 19:51:00.005935   57042 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 19:51:00.153910   57042 out.go:235]   - Generating certificates and keys ...
	I0906 19:51:00.154028   57042 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 19:51:00.154117   57042 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 19:51:00.173876   57042 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 19:51:00.366888   57042 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 19:50:57.870082   57714 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:50:57.870173   57714 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:50:57.870189   57714 cache.go:56] Caching tarball of preloaded images
	I0906 19:50:57.870303   57714 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:50:57.870334   57714 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 19:50:57.870487   57714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/config.json ...
	I0906 19:50:57.870691   57714 start.go:360] acquireMachinesLock for pause-306799: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:51:01.217769   57714 start.go:364] duration metric: took 3.347048125s to acquireMachinesLock for "pause-306799"
	I0906 19:51:01.217822   57714 start.go:96] Skipping create...Using existing machine configuration
	I0906 19:51:01.217846   57714 fix.go:54] fixHost starting: 
	I0906 19:51:01.218271   57714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:51:01.218332   57714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:51:01.238626   57714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45061
	I0906 19:51:01.239102   57714 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:51:01.239605   57714 main.go:141] libmachine: Using API Version  1
	I0906 19:51:01.239628   57714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:51:01.239942   57714 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:51:01.240133   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:01.240289   57714 main.go:141] libmachine: (pause-306799) Calling .GetState
	I0906 19:51:01.241885   57714 fix.go:112] recreateIfNeeded on pause-306799: state=Running err=<nil>
	W0906 19:51:01.241903   57714 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 19:51:01.243569   57714 out.go:177] * Updating the running kvm2 "pause-306799" VM ...
	I0906 19:51:01.244869   57714 machine.go:93] provisionDockerMachine start ...
	I0906 19:51:01.244892   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:01.245074   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.247985   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.248372   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.248395   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.248561   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:01.248713   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.248888   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.249040   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:01.249210   57714 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:01.249443   57714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0906 19:51:01.249461   57714 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 19:51:01.350472   57714 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-306799
	
	I0906 19:51:01.350522   57714 main.go:141] libmachine: (pause-306799) Calling .GetMachineName
	I0906 19:51:01.350911   57714 buildroot.go:166] provisioning hostname "pause-306799"
	I0906 19:51:01.350941   57714 main.go:141] libmachine: (pause-306799) Calling .GetMachineName
	I0906 19:51:01.351175   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.354073   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.354570   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.354599   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.354793   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:01.354996   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.355154   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.355280   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:01.355438   57714 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:01.355658   57714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0906 19:51:01.355676   57714 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-306799 && echo "pause-306799" | sudo tee /etc/hostname
	I0906 19:51:01.476743   57714 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-306799
	
	I0906 19:51:01.476766   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.479455   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.479739   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.479766   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.479930   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:01.480151   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.480312   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.480467   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:01.480635   57714 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:01.480824   57714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0906 19:51:01.480840   57714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-306799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-306799/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-306799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:51:01.583236   57714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:51:01.583282   57714 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:51:01.583306   57714 buildroot.go:174] setting up certificates
	I0906 19:51:01.583317   57714 provision.go:84] configureAuth start
	I0906 19:51:01.583340   57714 main.go:141] libmachine: (pause-306799) Calling .GetMachineName
	I0906 19:51:01.583621   57714 main.go:141] libmachine: (pause-306799) Calling .GetIP
	I0906 19:51:01.586914   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.587209   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.587282   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.587587   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.590362   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.590865   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.590888   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.591104   57714 provision.go:143] copyHostCerts
	I0906 19:51:01.591169   57714 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:51:01.591190   57714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:51:01.591266   57714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:51:01.591408   57714 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:51:01.591422   57714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:51:01.591471   57714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:51:01.591577   57714 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:51:01.591590   57714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:51:01.591623   57714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:51:01.591735   57714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.pause-306799 san=[127.0.0.1 192.168.50.125 localhost minikube pause-306799]
	I0906 19:51:01.687734   57714 provision.go:177] copyRemoteCerts
	I0906 19:51:01.687804   57714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:51:01.687833   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.690731   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.691126   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.691151   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.691423   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:01.691649   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.691842   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:01.691990   57714 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/pause-306799/id_rsa Username:docker}
	I0906 19:51:01.780081   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:51:01.813165   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0906 19:51:01.843688   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 19:51:01.872315   57714 provision.go:87] duration metric: took 288.978668ms to configureAuth
	I0906 19:51:01.872346   57714 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:51:01.872615   57714 config.go:182] Loaded profile config "pause-306799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:51:01.872713   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.876306   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.876758   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.876815   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.877002   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:01.877202   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.877403   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.877548   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:01.877766   57714 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:01.877991   57714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0906 19:51:01.878017   57714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:51:00.530695   57042 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 19:51:00.652000   57042 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 19:51:00.747397   57042 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 19:51:00.748033   57042 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-603826 localhost] and IPs [192.168.72.144 127.0.0.1 ::1]
	I0906 19:51:01.073495   57042 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 19:51:01.073905   57042 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-603826 localhost] and IPs [192.168.72.144 127.0.0.1 ::1]
	I0906 19:51:01.195201   57042 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 19:51:01.352013   57042 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 19:51:01.450495   57042 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 19:51:01.451122   57042 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 19:51:01.657738   57042 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 19:51:01.799646   57042 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 19:51:01.948933   57042 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 19:51:02.476957   57042 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 19:51:02.748427   57042 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 19:51:02.749305   57042 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 19:51:02.754590   57042 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 19:51:00.963847   57350 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:51:00.963888   57350 machine.go:96] duration metric: took 7.914728436s to provisionDockerMachine
	I0906 19:51:00.963915   57350 start.go:293] postStartSetup for "kubernetes-upgrade-959423" (driver="kvm2")
	I0906 19:51:00.963931   57350 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:51:00.963960   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:51:00.964277   57350 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:51:00.964310   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:51:00.967429   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:51:00.967933   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:49:52 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:51:00.967965   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:51:00.968138   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:51:00.968321   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:51:00.968487   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:51:00.968676   57350 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa Username:docker}
	I0906 19:51:01.057715   57350 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:51:01.062397   57350 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:51:01.062427   57350 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:51:01.062518   57350 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:51:01.062619   57350 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:51:01.062730   57350 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:51:01.073631   57350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:51:01.102165   57350 start.go:296] duration metric: took 138.234287ms for postStartSetup
	I0906 19:51:01.102204   57350 fix.go:56] duration metric: took 8.080413306s for fixHost
	I0906 19:51:01.102229   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:51:01.105187   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:51:01.105614   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:49:52 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:51:01.105645   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:51:01.105830   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:51:01.106041   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:51:01.106217   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:51:01.106387   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:51:01.106550   57350 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:01.106766   57350 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0906 19:51:01.106780   57350 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:51:01.217578   57350 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725652261.205693612
	
	I0906 19:51:01.217604   57350 fix.go:216] guest clock: 1725652261.205693612
	I0906 19:51:01.217614   57350 fix.go:229] Guest: 2024-09-06 19:51:01.205693612 +0000 UTC Remote: 2024-09-06 19:51:01.102209095 +0000 UTC m=+42.773268932 (delta=103.484517ms)
	I0906 19:51:01.217654   57350 fix.go:200] guest clock delta is within tolerance: 103.484517ms
	I0906 19:51:01.217665   57350 start.go:83] releasing machines lock for "kubernetes-upgrade-959423", held for 8.195906761s
	I0906 19:51:01.217710   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:51:01.218008   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetIP
	I0906 19:51:01.221111   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:51:01.221520   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:49:52 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:51:01.221550   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:51:01.221737   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:51:01.222349   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:51:01.222510   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .DriverName
	I0906 19:51:01.222585   57350 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:51:01.222627   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:51:01.222731   57350 ssh_runner.go:195] Run: cat /version.json
	I0906 19:51:01.222748   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHHostname
	I0906 19:51:01.225542   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:51:01.225805   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:51:01.225999   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:49:52 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:51:01.226031   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:51:01.226142   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:7b:80", ip: ""} in network mk-kubernetes-upgrade-959423: {Iface:virbr1 ExpiryTime:2024-09-06 20:49:52 +0000 UTC Type:0 Mac:52:54:00:42:7b:80 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:kubernetes-upgrade-959423 Clientid:01:52:54:00:42:7b:80}
	I0906 19:51:01.226166   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) DBG | domain kubernetes-upgrade-959423 has defined IP address 192.168.39.27 and MAC address 52:54:00:42:7b:80 in network mk-kubernetes-upgrade-959423
	I0906 19:51:01.226315   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:51:01.226438   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHPort
	I0906 19:51:01.226534   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:51:01.226604   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHKeyPath
	I0906 19:51:01.226695   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:51:01.226739   57350 main.go:141] libmachine: (kubernetes-upgrade-959423) Calling .GetSSHUsername
	I0906 19:51:01.226800   57350 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa Username:docker}
	I0906 19:51:01.226934   57350 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/kubernetes-upgrade-959423/id_rsa Username:docker}
	I0906 19:51:01.332522   57350 ssh_runner.go:195] Run: systemctl --version
	I0906 19:51:01.340883   57350 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:51:01.505460   57350 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 19:51:01.512452   57350 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:51:01.512522   57350 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:51:01.528269   57350 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 19:51:01.528291   57350 start.go:495] detecting cgroup driver to use...
	I0906 19:51:01.528363   57350 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:51:01.705162   57350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:51:01.781616   57350 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:51:01.781676   57350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:51:01.957154   57350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:51:02.067602   57350 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:51:02.399671   57350 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:51:02.745557   57350 docker.go:233] disabling docker service ...
	I0906 19:51:02.745635   57350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:51:02.881012   57350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:51:02.982301   57350 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:51:03.287363   57350 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:51:02.756496   57042 out.go:235]   - Booting up control plane ...
	I0906 19:51:02.756611   57042 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 19:51:02.756713   57042 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 19:51:02.756805   57042 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 19:51:02.772816   57042 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 19:51:02.778824   57042 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 19:51:02.778916   57042 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 19:51:02.925950   57042 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 19:51:02.926096   57042 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 19:51:03.926039   57042 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000972432s
	I0906 19:51:03.926158   57042 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 19:51:07.417181   57714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:51:07.417212   57714 machine.go:96] duration metric: took 6.172326991s to provisionDockerMachine
	I0906 19:51:07.417225   57714 start.go:293] postStartSetup for "pause-306799" (driver="kvm2")
	I0906 19:51:07.417238   57714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:51:07.417267   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:07.417590   57714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:51:07.417621   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:07.420555   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.420936   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:07.420965   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.421127   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:07.421302   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:07.421446   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:07.421607   57714 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/pause-306799/id_rsa Username:docker}
	I0906 19:51:07.499466   57714 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:51:07.503831   57714 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:51:07.503856   57714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:51:07.503926   57714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:51:07.504019   57714 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:51:07.504144   57714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:51:07.513654   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:51:07.539136   57714 start.go:296] duration metric: took 121.896794ms for postStartSetup
	I0906 19:51:07.539183   57714 fix.go:56] duration metric: took 6.321349013s for fixHost
	I0906 19:51:07.539208   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:07.541515   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.541795   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:07.541827   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.541942   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:07.542124   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:07.542288   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:07.542426   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:07.542617   57714 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:07.542766   57714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0906 19:51:07.542776   57714 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:51:07.641900   57714 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725652267.632838814
	
	I0906 19:51:07.641923   57714 fix.go:216] guest clock: 1725652267.632838814
	I0906 19:51:07.641930   57714 fix.go:229] Guest: 2024-09-06 19:51:07.632838814 +0000 UTC Remote: 2024-09-06 19:51:07.539188931 +0000 UTC m=+9.926937901 (delta=93.649883ms)
	I0906 19:51:07.641951   57714 fix.go:200] guest clock delta is within tolerance: 93.649883ms
	I0906 19:51:07.641957   57714 start.go:83] releasing machines lock for "pause-306799", held for 6.424160144s
	I0906 19:51:07.641980   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:07.642217   57714 main.go:141] libmachine: (pause-306799) Calling .GetIP
	I0906 19:51:07.644942   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.645310   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:07.645350   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.645500   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:07.646042   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:07.646222   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:07.646303   57714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:51:07.646339   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:07.646445   57714 ssh_runner.go:195] Run: cat /version.json
	I0906 19:51:07.646464   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:07.649162   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.649347   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.649600   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:07.649625   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.649766   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:07.649890   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:07.649914   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.649938   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:07.650067   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:07.650121   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:03.536035   57350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:51:03.574119   57350 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:51:03.610982   57350 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 19:51:03.611051   57350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:03.627490   57350 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:51:03.627565   57350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:03.641690   57350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:03.656716   57350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:03.668839   57350 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:51:03.681260   57350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:03.697627   57350 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:03.722917   57350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:03.742415   57350 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:51:03.753594   57350 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:51:03.784826   57350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:51:04.028849   57350 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 19:51:08.926184   57042 kubeadm.go:310] [api-check] The API server is healthy after 5.002924496s
	I0906 19:51:08.944710   57042 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 19:51:08.972959   57042 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 19:51:09.009112   57042 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 19:51:09.009379   57042 kubeadm.go:310] [mark-control-plane] Marking the node auto-603826 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 19:51:09.024309   57042 kubeadm.go:310] [bootstrap-token] Using token: t0zfmb.d9o1c1ghhz7zewmg
	I0906 19:51:07.650216   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:07.650297   57714 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/pause-306799/id_rsa Username:docker}
	I0906 19:51:07.650475   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:07.650626   57714 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/pause-306799/id_rsa Username:docker}
	I0906 19:51:07.722431   57714 ssh_runner.go:195] Run: systemctl --version
	I0906 19:51:07.745593   57714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:51:07.904565   57714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 19:51:07.912796   57714 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:51:07.912878   57714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:51:07.922631   57714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 19:51:07.922656   57714 start.go:495] detecting cgroup driver to use...
	I0906 19:51:07.922725   57714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:51:07.940234   57714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:51:07.955832   57714 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:51:07.955910   57714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:51:07.971533   57714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:51:07.987816   57714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:51:08.135894   57714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:51:08.278766   57714 docker.go:233] disabling docker service ...
	I0906 19:51:08.278855   57714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:51:08.302182   57714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:51:08.319614   57714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:51:08.468084   57714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:51:08.612332   57714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:51:08.629590   57714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:51:08.654792   57714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 19:51:08.654868   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.671184   57714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:51:08.671253   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.682935   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.698941   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.713568   57714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:51:08.725718   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.737928   57714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.751620   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.763786   57714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:51:08.774548   57714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:51:08.785199   57714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:51:08.934698   57714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 19:51:09.146920   57714 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:51:09.146992   57714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:51:09.153221   57714 start.go:563] Will wait 60s for crictl version
	I0906 19:51:09.153289   57714 ssh_runner.go:195] Run: which crictl
	I0906 19:51:09.157255   57714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:51:09.194447   57714 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 19:51:09.194541   57714 ssh_runner.go:195] Run: crio --version
	I0906 19:51:09.226208   57714 ssh_runner.go:195] Run: crio --version
	I0906 19:51:09.258357   57714 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 19:51:09.025839   57042 out.go:235]   - Configuring RBAC rules ...
	I0906 19:51:09.026009   57042 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 19:51:09.034838   57042 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 19:51:09.048472   57042 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 19:51:09.052253   57042 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 19:51:09.061844   57042 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 19:51:09.075238   57042 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 19:51:09.337488   57042 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 19:51:09.779148   57042 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 19:51:10.337505   57042 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 19:51:10.337535   57042 kubeadm.go:310] 
	I0906 19:51:10.337606   57042 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 19:51:10.337617   57042 kubeadm.go:310] 
	I0906 19:51:10.337762   57042 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 19:51:10.337793   57042 kubeadm.go:310] 
	I0906 19:51:10.337858   57042 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 19:51:10.337945   57042 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 19:51:10.338035   57042 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 19:51:10.338049   57042 kubeadm.go:310] 
	I0906 19:51:10.338129   57042 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 19:51:10.338152   57042 kubeadm.go:310] 
	I0906 19:51:10.338214   57042 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 19:51:10.338232   57042 kubeadm.go:310] 
	I0906 19:51:10.338307   57042 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 19:51:10.338403   57042 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 19:51:10.338491   57042 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 19:51:10.338507   57042 kubeadm.go:310] 
	I0906 19:51:10.338604   57042 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 19:51:10.338696   57042 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 19:51:10.338704   57042 kubeadm.go:310] 
	I0906 19:51:10.338824   57042 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token t0zfmb.d9o1c1ghhz7zewmg \
	I0906 19:51:10.338955   57042 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 19:51:10.338984   57042 kubeadm.go:310] 	--control-plane 
	I0906 19:51:10.338994   57042 kubeadm.go:310] 
	I0906 19:51:10.339105   57042 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 19:51:10.339116   57042 kubeadm.go:310] 
	I0906 19:51:10.339254   57042 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token t0zfmb.d9o1c1ghhz7zewmg \
	I0906 19:51:10.339390   57042 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 19:51:10.340506   57042 kubeadm.go:310] W0906 19:50:59.854364     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 19:51:10.340938   57042 kubeadm.go:310] W0906 19:50:59.855293     831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 19:51:10.341072   57042 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 19:51:10.341105   57042 cni.go:84] Creating CNI manager for ""
	I0906 19:51:10.341121   57042 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:51:10.342949   57042 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 19:51:10.344173   57042 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 19:51:10.355643   57042 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 19:51:10.377244   57042 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 19:51:10.377316   57042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 19:51:10.377347   57042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-603826 minikube.k8s.io/updated_at=2024_09_06T19_51_10_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=auto-603826 minikube.k8s.io/primary=true
	I0906 19:51:09.259500   57714 main.go:141] libmachine: (pause-306799) Calling .GetIP
	I0906 19:51:09.261915   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:09.262218   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:09.262244   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:09.262515   57714 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0906 19:51:09.266957   57714 kubeadm.go:883] updating cluster {Name:pause-306799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-306799 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:51:09.267123   57714 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:51:09.267166   57714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:51:09.312254   57714 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:51:09.312279   57714 crio.go:433] Images already preloaded, skipping extraction
	I0906 19:51:09.312331   57714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:51:09.347222   57714 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:51:09.347243   57714 cache_images.go:84] Images are preloaded, skipping loading
	I0906 19:51:09.347251   57714 kubeadm.go:934] updating node { 192.168.50.125 8443 v1.31.0 crio true true} ...
	I0906 19:51:09.347378   57714 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-306799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-306799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:51:09.347462   57714 ssh_runner.go:195] Run: crio config
	I0906 19:51:09.401605   57714 cni.go:84] Creating CNI manager for ""
	I0906 19:51:09.401636   57714 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:51:09.401659   57714 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:51:09.401686   57714 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-306799 NodeName:pause-306799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:51:09.401894   57714 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-306799"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:51:09.401980   57714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 19:51:09.412737   57714 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:51:09.412810   57714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 19:51:09.423404   57714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 19:51:09.440696   57714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:51:09.457829   57714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0906 19:51:09.475345   57714 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0906 19:51:09.480454   57714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:51:09.616433   57714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:51:09.635496   57714 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799 for IP: 192.168.50.125
	I0906 19:51:09.635529   57714 certs.go:194] generating shared ca certs ...
	I0906 19:51:09.635543   57714 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:51:09.635713   57714 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:51:09.635775   57714 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:51:09.635789   57714 certs.go:256] generating profile certs ...
	I0906 19:51:09.635910   57714 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/client.key
	I0906 19:51:09.636012   57714 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/apiserver.key.246d0d9a
	I0906 19:51:09.636067   57714 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/proxy-client.key
	I0906 19:51:09.636231   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:51:09.636268   57714 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:51:09.636282   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:51:09.636317   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:51:09.636350   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:51:09.636386   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:51:09.636441   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:51:09.637168   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:51:09.673319   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:51:09.706687   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:51:09.736192   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:51:09.762158   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 19:51:09.872346   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 19:51:10.076308   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:51:10.373308   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 19:51:10.443189   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:51:10.572596   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:51:10.661517   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:51:10.732243   57714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:51:10.780383   57714 ssh_runner.go:195] Run: openssl version
	I0906 19:51:10.798643   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:51:10.819673   57714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:51:10.868596   57714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:51:10.868666   57714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:51:10.882897   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:51:10.917348   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:51:10.939430   57714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:51:10.946755   57714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:51:10.946823   57714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:51:10.960405   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 19:51:10.976140   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:51:10.989311   57714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:51:10.994767   57714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:51:10.994835   57714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:51:11.003892   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 19:51:11.017644   57714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:51:11.023799   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 19:51:11.032735   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 19:51:11.043673   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 19:51:11.053791   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 19:51:11.067159   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 19:51:11.075891   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 19:51:11.088507   57714 kubeadm.go:392] StartCluster: {Name:pause-306799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-306799 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false reg
istry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:51:11.088630   57714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:51:11.088701   57714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:51:11.193474   57714 cri.go:89] found id: "2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe"
	I0906 19:51:11.193500   57714 cri.go:89] found id: "38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40"
	I0906 19:51:11.193504   57714 cri.go:89] found id: "b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561"
	I0906 19:51:11.193506   57714 cri.go:89] found id: "2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec"
	I0906 19:51:11.193509   57714 cri.go:89] found id: "2b9420a0e24963243b49fbda11aa9d8db70d95e55008ace0561a42167942aa14"
	I0906 19:51:11.193513   57714 cri.go:89] found id: "82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2"
	I0906 19:51:11.193515   57714 cri.go:89] found id: "91d9c7eee276948bf6e7a2cc7d2d970c7e2b2fd64d59122dc890dc4ec18a873e"
	I0906 19:51:11.193517   57714 cri.go:89] found id: "cd5f9fd41f5323bb23d32dbd7a4868e2a9e1073672e71a8de72ce6bd2720f1b4"
	I0906 19:51:11.193520   57714 cri.go:89] found id: "3f8d268b6a59eb50dfa1964841fd8387806e78a3ff267d28abafe1872b4f9c3d"
	I0906 19:51:11.193526   57714 cri.go:89] found id: "6193e6f4141c98c97a9fc5e4b80236700c909fdecff57ecaa2f1a9523fb25d50"
	I0906 19:51:11.193528   57714 cri.go:89] found id: ""
	I0906 19:51:11.193571   57714 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.237189201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652305237163434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ed5bcba-0718-41fe-85e5-5f4935a4841f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.238084030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bce4ca39-cd80-40d6-a611-bef069263be5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.238165343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bce4ca39-cd80-40d6-a611-bef069263be5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.240321317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc55c0a701f02ece001740a446fe1447293395738453797928479f3dbc509ce,PodSandboxId:d9ee7c3daebb3d0146a5170b16bf027fe35c22f3460080a519dc647c43bf447d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652302043380057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r5ndr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f17dee-5b86-4c85-8bff-67cbcb71d003,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f2183a2ce543990eb1c441662155a8efd7ee868b91b51056b62d0fb4c2692e,PodSandboxId:4559dc373637941416eadd9450cf40203a0c4057600cd0807a2520f62e051bbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652302048853010,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pd2nh,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0dcbcefa-ff09-44c0-8908-d424ae0b922d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595dae767a70b52ce1dc0d2b18062690a58c8b76b93dbbffc0c4d10ea525680a,PodSandboxId:8199966d58fe40a9bf1cca208d34e8a30cfdd7a943334c8505989c9cdbb9cd65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CON
TAINER_RUNNING,CreatedAt:1725652298979321833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4639354c05525c9d8db7f080e3dfd697,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2cf131e8ce847ea91db91d74f31bf12d781b0be060cc886b54469c5fdfb7fe,PodSandboxId:2dbe7ce3a39c025dacaa7a4e1a7f2f2f489a602b9e46729135fa6d12069416b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RU
NNING,CreatedAt:1725652276536539348,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvzqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aadd6e4-96ac-44ab-9291-74395e459e0d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7b273b6ce7a05f2c36a2f9ad86f6dfceec2ae118f6b597549570cc0fbad606,PodSandboxId:71cba863090ebc86f9010d0fe66acd4ce64c7435b54caca0edb7abd006367562,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172565227
6626789073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7595906a-7f0a-472b-9b25-9ed8358a5e19,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ef040d638b01e160cca3f51a534686779a23b8d8a1cfb11ff0f0b4e0a7ca5e,PodSandboxId:3c873a7a08997a03eb19f06a0f820246b6f58edfc970a7024002ca0e40729ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652276581770520,Labels
:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6fa960bd840967fb60bf3e6b714fcb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a273bb32ec7a5979b2f636f5d98d82dfa6b30a987e85929e5f8754c64fade17,PodSandboxId:cba3d718363a0d84318365104a76aa2bc31f0a90937504397bd9eec2a8b40e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652276573400675,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed488d05934c318dc782c3cd9b27b175,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5232c1d4f851ae77c04745b0c85c2ff160ba1e031642b3a8a64cc1cd5305779a,PodSandboxId:63f09472dbb55aaed96f6e94ed8ea9d1cb55fb6db304f2bab97f0d26e93e88fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652276428470507
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70eeeabe5246deebd63c14e48999a4b4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfde39dec30436cb7f33f6fb3f8b9f270da79428f1982c86e925e84a8e5028e8,PodSandboxId:8199966d58fe40a9bf1cca208d34e8a30cfdd7a943334c8505989c9cdbb9cd65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652276335512300,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4639354c05525c9d8db7f080e3dfd697,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff3e68be48f9599020f82e1340bbc79259a42ab5512d5fdb3a0b7fc7a67a9b,PodSandboxId:25e0a6f2b464ace0793a3f81cef36877e714243d5bae063ddbc2860b825023a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652263464324568,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r5ndr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f17dee-5b86-4c85-8bff-67cbcb71d003,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:658e8daeca008aa1eb3f711da053d9e7b2074bcfa5e0da1d3d5111c936c71c5f,PodSandboxId:7e9e54e5217fd2fd2bfef4f985838d4210a8d67a44f8bfa8173a2a38128c0e44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652263208656722,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pd2nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcbcefa-ff09-44c0-8908-d424ae0b922d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e943e13d127832f5e5666b3e1c0b743669f74096981d7bb78b6107e21d6d779d,PodSandboxId:61044891942a7978d29406137307517b4979d088417c08660e4f1
b4a090cd7bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652262328823319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvzqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aadd6e4-96ac-44ab-9291-74395e459e0d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da460ec11253b86af33172d47f88d4654f297cf774d8c679a0ada4e4411efdfe,PodSandboxId:5a6a906b1b0144585e5c5275b8e64f8890eeb097074c679b2fa7b91c974582bb,Metadata:&ContainerMeta
data{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652262294854576,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70eeeabe5246deebd63c14e48999a4b4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ef3791c7be9948cf7046555bfdea9d17a990f6de2f5b1fcb7d7185acc934c0,PodSandboxId:add18a78b629ecc872091e092964d0c469c3c5db632eac473876f2e6209ce0f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,}
,Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652262222097944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6fa960bd840967fb60bf3e6b714fcb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56defb60ba7ce66bf02b36d6890c130247ffd966f3c42a5c77faaebed9a68728,PodSandboxId:3ea45d939c75d799e96db354b043d6b8eda02dfca4d5f2dd2f45b43a4da59a2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652261967779507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed488d05934c318dc782c3cd9b27b175,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc78cc5a655321939aec201db522d171923aba0903cb32d86a06ff98158adc8,PodSandboxId:b6c3a1ad9a2c7d2139fc833faddb3d5a8fc45509762d63dba5cf015734566a70,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725652252519400499,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7595906a-7f0a-472b-9b25-9ed8358a5e19,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bce4ca39-cd80-40d6-a611-bef069263be5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.291912100Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07bf021f-20b2-440a-9e9d-86b2dae5a510 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.292002362Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07bf021f-20b2-440a-9e9d-86b2dae5a510 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.293409078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac033a89-a198-4166-950c-0e49390f160c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.293840108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652305293811373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac033a89-a198-4166-950c-0e49390f160c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.294764446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e151a26-ab9d-4c75-83ad-632d7bb1cfe8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.294871851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e151a26-ab9d-4c75-83ad-632d7bb1cfe8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.295193086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc55c0a701f02ece001740a446fe1447293395738453797928479f3dbc509ce,PodSandboxId:d9ee7c3daebb3d0146a5170b16bf027fe35c22f3460080a519dc647c43bf447d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652302043380057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r5ndr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f17dee-5b86-4c85-8bff-67cbcb71d003,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f2183a2ce543990eb1c441662155a8efd7ee868b91b51056b62d0fb4c2692e,PodSandboxId:4559dc373637941416eadd9450cf40203a0c4057600cd0807a2520f62e051bbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652302048853010,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pd2nh,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0dcbcefa-ff09-44c0-8908-d424ae0b922d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595dae767a70b52ce1dc0d2b18062690a58c8b76b93dbbffc0c4d10ea525680a,PodSandboxId:8199966d58fe40a9bf1cca208d34e8a30cfdd7a943334c8505989c9cdbb9cd65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CON
TAINER_RUNNING,CreatedAt:1725652298979321833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4639354c05525c9d8db7f080e3dfd697,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2cf131e8ce847ea91db91d74f31bf12d781b0be060cc886b54469c5fdfb7fe,PodSandboxId:2dbe7ce3a39c025dacaa7a4e1a7f2f2f489a602b9e46729135fa6d12069416b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RU
NNING,CreatedAt:1725652276536539348,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvzqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aadd6e4-96ac-44ab-9291-74395e459e0d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7b273b6ce7a05f2c36a2f9ad86f6dfceec2ae118f6b597549570cc0fbad606,PodSandboxId:71cba863090ebc86f9010d0fe66acd4ce64c7435b54caca0edb7abd006367562,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172565227
6626789073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7595906a-7f0a-472b-9b25-9ed8358a5e19,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ef040d638b01e160cca3f51a534686779a23b8d8a1cfb11ff0f0b4e0a7ca5e,PodSandboxId:3c873a7a08997a03eb19f06a0f820246b6f58edfc970a7024002ca0e40729ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652276581770520,Labels
:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6fa960bd840967fb60bf3e6b714fcb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a273bb32ec7a5979b2f636f5d98d82dfa6b30a987e85929e5f8754c64fade17,PodSandboxId:cba3d718363a0d84318365104a76aa2bc31f0a90937504397bd9eec2a8b40e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652276573400675,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed488d05934c318dc782c3cd9b27b175,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5232c1d4f851ae77c04745b0c85c2ff160ba1e031642b3a8a64cc1cd5305779a,PodSandboxId:63f09472dbb55aaed96f6e94ed8ea9d1cb55fb6db304f2bab97f0d26e93e88fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652276428470507
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70eeeabe5246deebd63c14e48999a4b4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfde39dec30436cb7f33f6fb3f8b9f270da79428f1982c86e925e84a8e5028e8,PodSandboxId:8199966d58fe40a9bf1cca208d34e8a30cfdd7a943334c8505989c9cdbb9cd65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652276335512300,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4639354c05525c9d8db7f080e3dfd697,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff3e68be48f9599020f82e1340bbc79259a42ab5512d5fdb3a0b7fc7a67a9b,PodSandboxId:25e0a6f2b464ace0793a3f81cef36877e714243d5bae063ddbc2860b825023a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652263464324568,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r5ndr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f17dee-5b86-4c85-8bff-67cbcb71d003,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:658e8daeca008aa1eb3f711da053d9e7b2074bcfa5e0da1d3d5111c936c71c5f,PodSandboxId:7e9e54e5217fd2fd2bfef4f985838d4210a8d67a44f8bfa8173a2a38128c0e44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652263208656722,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pd2nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcbcefa-ff09-44c0-8908-d424ae0b922d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e943e13d127832f5e5666b3e1c0b743669f74096981d7bb78b6107e21d6d779d,PodSandboxId:61044891942a7978d29406137307517b4979d088417c08660e4f1
b4a090cd7bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652262328823319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvzqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aadd6e4-96ac-44ab-9291-74395e459e0d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da460ec11253b86af33172d47f88d4654f297cf774d8c679a0ada4e4411efdfe,PodSandboxId:5a6a906b1b0144585e5c5275b8e64f8890eeb097074c679b2fa7b91c974582bb,Metadata:&ContainerMeta
data{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652262294854576,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70eeeabe5246deebd63c14e48999a4b4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ef3791c7be9948cf7046555bfdea9d17a990f6de2f5b1fcb7d7185acc934c0,PodSandboxId:add18a78b629ecc872091e092964d0c469c3c5db632eac473876f2e6209ce0f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,}
,Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652262222097944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6fa960bd840967fb60bf3e6b714fcb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56defb60ba7ce66bf02b36d6890c130247ffd966f3c42a5c77faaebed9a68728,PodSandboxId:3ea45d939c75d799e96db354b043d6b8eda02dfca4d5f2dd2f45b43a4da59a2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652261967779507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed488d05934c318dc782c3cd9b27b175,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc78cc5a655321939aec201db522d171923aba0903cb32d86a06ff98158adc8,PodSandboxId:b6c3a1ad9a2c7d2139fc833faddb3d5a8fc45509762d63dba5cf015734566a70,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725652252519400499,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7595906a-7f0a-472b-9b25-9ed8358a5e19,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e151a26-ab9d-4c75-83ad-632d7bb1cfe8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.345140522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39c38bcd-37cf-485b-b17f-8fd0ba223019 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.345236051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39c38bcd-37cf-485b-b17f-8fd0ba223019 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.346771459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3ba4c8d-930a-4da0-96b3-b138611c4e2e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.347175239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652305347148909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3ba4c8d-930a-4da0-96b3-b138611c4e2e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.347720693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cab06abd-810c-446b-a837-1896ef26cac7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.347775656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cab06abd-810c-446b-a837-1896ef26cac7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.348117936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc55c0a701f02ece001740a446fe1447293395738453797928479f3dbc509ce,PodSandboxId:d9ee7c3daebb3d0146a5170b16bf027fe35c22f3460080a519dc647c43bf447d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652302043380057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r5ndr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f17dee-5b86-4c85-8bff-67cbcb71d003,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f2183a2ce543990eb1c441662155a8efd7ee868b91b51056b62d0fb4c2692e,PodSandboxId:4559dc373637941416eadd9450cf40203a0c4057600cd0807a2520f62e051bbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652302048853010,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pd2nh,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0dcbcefa-ff09-44c0-8908-d424ae0b922d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595dae767a70b52ce1dc0d2b18062690a58c8b76b93dbbffc0c4d10ea525680a,PodSandboxId:8199966d58fe40a9bf1cca208d34e8a30cfdd7a943334c8505989c9cdbb9cd65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CON
TAINER_RUNNING,CreatedAt:1725652298979321833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4639354c05525c9d8db7f080e3dfd697,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2cf131e8ce847ea91db91d74f31bf12d781b0be060cc886b54469c5fdfb7fe,PodSandboxId:2dbe7ce3a39c025dacaa7a4e1a7f2f2f489a602b9e46729135fa6d12069416b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RU
NNING,CreatedAt:1725652276536539348,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvzqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aadd6e4-96ac-44ab-9291-74395e459e0d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7b273b6ce7a05f2c36a2f9ad86f6dfceec2ae118f6b597549570cc0fbad606,PodSandboxId:71cba863090ebc86f9010d0fe66acd4ce64c7435b54caca0edb7abd006367562,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172565227
6626789073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7595906a-7f0a-472b-9b25-9ed8358a5e19,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ef040d638b01e160cca3f51a534686779a23b8d8a1cfb11ff0f0b4e0a7ca5e,PodSandboxId:3c873a7a08997a03eb19f06a0f820246b6f58edfc970a7024002ca0e40729ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652276581770520,Labels
:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6fa960bd840967fb60bf3e6b714fcb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a273bb32ec7a5979b2f636f5d98d82dfa6b30a987e85929e5f8754c64fade17,PodSandboxId:cba3d718363a0d84318365104a76aa2bc31f0a90937504397bd9eec2a8b40e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652276573400675,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed488d05934c318dc782c3cd9b27b175,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5232c1d4f851ae77c04745b0c85c2ff160ba1e031642b3a8a64cc1cd5305779a,PodSandboxId:63f09472dbb55aaed96f6e94ed8ea9d1cb55fb6db304f2bab97f0d26e93e88fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652276428470507
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70eeeabe5246deebd63c14e48999a4b4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfde39dec30436cb7f33f6fb3f8b9f270da79428f1982c86e925e84a8e5028e8,PodSandboxId:8199966d58fe40a9bf1cca208d34e8a30cfdd7a943334c8505989c9cdbb9cd65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652276335512300,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4639354c05525c9d8db7f080e3dfd697,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff3e68be48f9599020f82e1340bbc79259a42ab5512d5fdb3a0b7fc7a67a9b,PodSandboxId:25e0a6f2b464ace0793a3f81cef36877e714243d5bae063ddbc2860b825023a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652263464324568,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r5ndr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f17dee-5b86-4c85-8bff-67cbcb71d003,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:658e8daeca008aa1eb3f711da053d9e7b2074bcfa5e0da1d3d5111c936c71c5f,PodSandboxId:7e9e54e5217fd2fd2bfef4f985838d4210a8d67a44f8bfa8173a2a38128c0e44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652263208656722,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pd2nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcbcefa-ff09-44c0-8908-d424ae0b922d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e943e13d127832f5e5666b3e1c0b743669f74096981d7bb78b6107e21d6d779d,PodSandboxId:61044891942a7978d29406137307517b4979d088417c08660e4f1
b4a090cd7bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652262328823319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvzqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aadd6e4-96ac-44ab-9291-74395e459e0d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da460ec11253b86af33172d47f88d4654f297cf774d8c679a0ada4e4411efdfe,PodSandboxId:5a6a906b1b0144585e5c5275b8e64f8890eeb097074c679b2fa7b91c974582bb,Metadata:&ContainerMeta
data{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652262294854576,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70eeeabe5246deebd63c14e48999a4b4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ef3791c7be9948cf7046555bfdea9d17a990f6de2f5b1fcb7d7185acc934c0,PodSandboxId:add18a78b629ecc872091e092964d0c469c3c5db632eac473876f2e6209ce0f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,}
,Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652262222097944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6fa960bd840967fb60bf3e6b714fcb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56defb60ba7ce66bf02b36d6890c130247ffd966f3c42a5c77faaebed9a68728,PodSandboxId:3ea45d939c75d799e96db354b043d6b8eda02dfca4d5f2dd2f45b43a4da59a2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652261967779507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed488d05934c318dc782c3cd9b27b175,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc78cc5a655321939aec201db522d171923aba0903cb32d86a06ff98158adc8,PodSandboxId:b6c3a1ad9a2c7d2139fc833faddb3d5a8fc45509762d63dba5cf015734566a70,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725652252519400499,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7595906a-7f0a-472b-9b25-9ed8358a5e19,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cab06abd-810c-446b-a837-1896ef26cac7 name=/runtime.v1.RuntimeService/ListContainers
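The Version, ImageFsInfo, and ListContainers calls logged above are routine CRI polls; they can be replayed by hand against the same CRI-O instance with crictl to inspect the running and exited control-plane containers. A minimal sketch, assuming CRI-O's default socket path (the path is an assumption, not taken from this log):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json

As with the empty-filter requests above, `ps -a` returns the full container list, including the CONTAINER_EXITED entries left over from earlier restart attempts.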
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.407642753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7c90805-0d1e-4f68-bade-95db2142784d name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.407720106Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7c90805-0d1e-4f68-bade-95db2142784d name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.408958995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ecc28e6e-291b-4b67-a016-23c9cdc686de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.409360147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652305409331100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ecc28e6e-291b-4b67-a016-23c9cdc686de name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.409960783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e20f813f-a504-4e95-9e21-250f3dd3664d name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.410014556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e20f813f-a504-4e95-9e21-250f3dd3664d name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:45 kubernetes-upgrade-959423 crio[3008]: time="2024-09-06 19:51:45.410339085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2bc55c0a701f02ece001740a446fe1447293395738453797928479f3dbc509ce,PodSandboxId:d9ee7c3daebb3d0146a5170b16bf027fe35c22f3460080a519dc647c43bf447d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652302043380057,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r5ndr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f17dee-5b86-4c85-8bff-67cbcb71d003,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4f2183a2ce543990eb1c441662155a8efd7ee868b91b51056b62d0fb4c2692e,PodSandboxId:4559dc373637941416eadd9450cf40203a0c4057600cd0807a2520f62e051bbd,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652302048853010,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pd2nh,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0dcbcefa-ff09-44c0-8908-d424ae0b922d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595dae767a70b52ce1dc0d2b18062690a58c8b76b93dbbffc0c4d10ea525680a,PodSandboxId:8199966d58fe40a9bf1cca208d34e8a30cfdd7a943334c8505989c9cdbb9cd65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CON
TAINER_RUNNING,CreatedAt:1725652298979321833,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4639354c05525c9d8db7f080e3dfd697,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2cf131e8ce847ea91db91d74f31bf12d781b0be060cc886b54469c5fdfb7fe,PodSandboxId:2dbe7ce3a39c025dacaa7a4e1a7f2f2f489a602b9e46729135fa6d12069416b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RU
NNING,CreatedAt:1725652276536539348,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvzqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aadd6e4-96ac-44ab-9291-74395e459e0d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e7b273b6ce7a05f2c36a2f9ad86f6dfceec2ae118f6b597549570cc0fbad606,PodSandboxId:71cba863090ebc86f9010d0fe66acd4ce64c7435b54caca0edb7abd006367562,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172565227
6626789073,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7595906a-7f0a-472b-9b25-9ed8358a5e19,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ef040d638b01e160cca3f51a534686779a23b8d8a1cfb11ff0f0b4e0a7ca5e,PodSandboxId:3c873a7a08997a03eb19f06a0f820246b6f58edfc970a7024002ca0e40729ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652276581770520,Labels
:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6fa960bd840967fb60bf3e6b714fcb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a273bb32ec7a5979b2f636f5d98d82dfa6b30a987e85929e5f8754c64fade17,PodSandboxId:cba3d718363a0d84318365104a76aa2bc31f0a90937504397bd9eec2a8b40e4e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652276573400675,La
bels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed488d05934c318dc782c3cd9b27b175,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5232c1d4f851ae77c04745b0c85c2ff160ba1e031642b3a8a64cc1cd5305779a,PodSandboxId:63f09472dbb55aaed96f6e94ed8ea9d1cb55fb6db304f2bab97f0d26e93e88fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652276428470507
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70eeeabe5246deebd63c14e48999a4b4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfde39dec30436cb7f33f6fb3f8b9f270da79428f1982c86e925e84a8e5028e8,PodSandboxId:8199966d58fe40a9bf1cca208d34e8a30cfdd7a943334c8505989c9cdbb9cd65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652276335512300,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4639354c05525c9d8db7f080e3dfd697,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aff3e68be48f9599020f82e1340bbc79259a42ab5512d5fdb3a0b7fc7a67a9b,PodSandboxId:25e0a6f2b464ace0793a3f81cef36877e714243d5bae063ddbc2860b825023a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652263464324568,Labels:map[string]string{io.kubernetes
.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-r5ndr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18f17dee-5b86-4c85-8bff-67cbcb71d003,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:658e8daeca008aa1eb3f711da053d9e7b2074bcfa5e0da1d3d5111c936c71c5f,PodSandboxId:7e9e54e5217fd2fd2bfef4f985838d4210a8d67a44f8bfa8173a2a38128c0e44,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652263208656722,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-pd2nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dcbcefa-ff09-44c0-8908-d424ae0b922d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e943e13d127832f5e5666b3e1c0b743669f74096981d7bb78b6107e21d6d779d,PodSandboxId:61044891942a7978d29406137307517b4979d088417c08660e4f1
b4a090cd7bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652262328823319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rvzqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aadd6e4-96ac-44ab-9291-74395e459e0d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da460ec11253b86af33172d47f88d4654f297cf774d8c679a0ada4e4411efdfe,PodSandboxId:5a6a906b1b0144585e5c5275b8e64f8890eeb097074c679b2fa7b91c974582bb,Metadata:&ContainerMeta
data{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652262294854576,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70eeeabe5246deebd63c14e48999a4b4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2ef3791c7be9948cf7046555bfdea9d17a990f6de2f5b1fcb7d7185acc934c0,PodSandboxId:add18a78b629ecc872091e092964d0c469c3c5db632eac473876f2e6209ce0f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,}
,Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652262222097944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f6fa960bd840967fb60bf3e6b714fcb,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56defb60ba7ce66bf02b36d6890c130247ffd966f3c42a5c77faaebed9a68728,PodSandboxId:3ea45d939c75d799e96db354b043d6b8eda02dfca4d5f2dd2f45b43a4da59a2b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652261967779507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-959423,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed488d05934c318dc782c3cd9b27b175,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdc78cc5a655321939aec201db522d171923aba0903cb32d86a06ff98158adc8,PodSandboxId:b6c3a1ad9a2c7d2139fc833faddb3d5a8fc45509762d63dba5cf015734566a70,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1725652252519400499,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7595906a-7f0a-472b-9b25-9ed8358a5e19,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e20f813f-a504-4e95-9e21-250f3dd3664d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4f2183a2ce54       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   4559dc3736379       coredns-6f6b679f8f-pd2nh
	2bc55c0a701f0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   d9ee7c3daebb3       coredns-6f6b679f8f-r5ndr
	595dae767a70b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   6 seconds ago       Running             kube-apiserver            3                   8199966d58fe4       kube-apiserver-kubernetes-upgrade-959423
	5e7b273b6ce7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   28 seconds ago      Running             storage-provisioner       2                   71cba863090eb       storage-provisioner
	44ef040d638b0       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   28 seconds ago      Running             kube-scheduler            2                   3c873a7a08997       kube-scheduler-kubernetes-upgrade-959423
	7a273bb32ec7a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   28 seconds ago      Running             kube-controller-manager   2                   cba3d718363a0       kube-controller-manager-kubernetes-upgrade-959423
	fd2cf131e8ce8       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   28 seconds ago      Running             kube-proxy                2                   2dbe7ce3a39c0       kube-proxy-rvzqv
	5232c1d4f851a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   29 seconds ago      Running             etcd                      2                   63f09472dbb55       etcd-kubernetes-upgrade-959423
	cfde39dec3043       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   29 seconds ago      Exited              kube-apiserver            2                   8199966d58fe4       kube-apiserver-kubernetes-upgrade-959423
	2aff3e68be48f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   42 seconds ago      Exited              coredns                   1                   25e0a6f2b464a       coredns-6f6b679f8f-r5ndr
	658e8daeca008       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   42 seconds ago      Exited              coredns                   1                   7e9e54e5217fd       coredns-6f6b679f8f-pd2nh
	e943e13d12783       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   43 seconds ago      Exited              kube-proxy                1                   61044891942a7       kube-proxy-rvzqv
	da460ec11253b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   43 seconds ago      Exited              etcd                      1                   5a6a906b1b014       etcd-kubernetes-upgrade-959423
	d2ef3791c7be9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   43 seconds ago      Exited              kube-scheduler            1                   add18a78b629e       kube-scheduler-kubernetes-upgrade-959423
	56defb60ba7ce       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   43 seconds ago      Exited              kube-controller-manager   1                   3ea45d939c75d       kube-controller-manager-kubernetes-upgrade-959423
	fdc78cc5a6553       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   52 seconds ago      Exited              storage-provisioner       1                   b6c3a1ad9a2c7       storage-provisioner
	
	
	==> coredns [2aff3e68be48f9599020f82e1340bbc79259a42ab5512d5fdb3a0b7fc7a67a9b] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2bc55c0a701f02ece001740a446fe1447293395738453797928479f3dbc509ce] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [658e8daeca008aa1eb3f711da053d9e7b2074bcfa5e0da1d3d5111c936c71c5f] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f4f2183a2ce543990eb1c441662155a8efd7ee868b91b51056b62d0fb4c2692e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-959423
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-959423
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:50:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-959423
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:51:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:51:41 +0000   Fri, 06 Sep 2024 19:50:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:51:41 +0000   Fri, 06 Sep 2024 19:50:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:51:41 +0000   Fri, 06 Sep 2024 19:50:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:51:41 +0000   Fri, 06 Sep 2024 19:50:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    kubernetes-upgrade-959423
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b56eaabc9d5e44cf818e2522f618aa51
	  System UUID:                b56eaabc-9d5e-44cf-818e-2522f618aa51
	  Boot ID:                    1d2fe432-4fc1-493c-a38f-80c7392adccb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-pd2nh                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 coredns-6f6b679f8f-r5ndr                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-kubernetes-upgrade-959423                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         83s
	  kube-system                 kube-apiserver-kubernetes-upgrade-959423             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-959423    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-rvzqv                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-kubernetes-upgrade-959423             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 83s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s (x8 over 96s)  kubelet          Node kubernetes-upgrade-959423 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x8 over 96s)  kubelet          Node kubernetes-upgrade-959423 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x7 over 96s)  kubelet          Node kubernetes-upgrade-959423 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                node-controller  Node kubernetes-upgrade-959423 event: Registered Node kubernetes-upgrade-959423 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-959423 event: Registered Node kubernetes-upgrade-959423 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 19:50] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.063012] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.047237] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.205098] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.109103] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.274459] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +4.296167] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +0.063083] kauditd_printk_skb: 134 callbacks suppressed
	[  +2.037023] systemd-fstab-generator[838]: Ignoring "noauto" option for root device
	[  +8.380328] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.097603] kauditd_printk_skb: 93 callbacks suppressed
	[  +4.998690] kauditd_printk_skb: 84 callbacks suppressed
	[ +30.082468] kauditd_printk_skb: 13 callbacks suppressed
	[Sep 6 19:51] systemd-fstab-generator[2508]: Ignoring "noauto" option for root device
	[  +0.330325] systemd-fstab-generator[2657]: Ignoring "noauto" option for root device
	[  +0.523499] systemd-fstab-generator[2841]: Ignoring "noauto" option for root device
	[  +0.257006] systemd-fstab-generator[2892]: Ignoring "noauto" option for root device
	[  +0.509241] systemd-fstab-generator[2993]: Ignoring "noauto" option for root device
	[ +11.207707] systemd-fstab-generator[3323]: Ignoring "noauto" option for root device
	[  +0.098872] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.298946] systemd-fstab-generator[4066]: Ignoring "noauto" option for root device
	[ +19.496141] kauditd_printk_skb: 145 callbacks suppressed
	[  +5.303391] systemd-fstab-generator[4509]: Ignoring "noauto" option for root device
	[  +0.121939] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [5232c1d4f851ae77c04745b0c85c2ff160ba1e031642b3a8a64cc1cd5305779a] <==
	{"level":"info","ts":"2024-09-06T19:51:38.789954Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.27:2380"}
	{"level":"info","ts":"2024-09-06T19:51:38.790715Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-06T19:51:38.791368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e2a40e9bcf7e0d switched to configuration voters=(1793175984297442829)"}
	{"level":"info","ts":"2024-09-06T19:51:38.791611Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7b271957de89ff6f","local-member-id":"18e2a40e9bcf7e0d","added-peer-id":"18e2a40e9bcf7e0d","added-peer-peer-urls":["https://192.168.39.27:2380"]}
	{"level":"info","ts":"2024-09-06T19:51:38.792723Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7b271957de89ff6f","local-member-id":"18e2a40e9bcf7e0d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:51:38.792774Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:51:38.794911Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:51:38.794964Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:51:38.794974Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:51:39.971888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e2a40e9bcf7e0d is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-06T19:51:39.971992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e2a40e9bcf7e0d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:51:39.972054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e2a40e9bcf7e0d received MsgPreVoteResp from 18e2a40e9bcf7e0d at term 2"}
	{"level":"info","ts":"2024-09-06T19:51:39.972090Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e2a40e9bcf7e0d became candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:39.972114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e2a40e9bcf7e0d received MsgVoteResp from 18e2a40e9bcf7e0d at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:39.972150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e2a40e9bcf7e0d became leader at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:39.972175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18e2a40e9bcf7e0d elected leader 18e2a40e9bcf7e0d at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:39.982981Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"18e2a40e9bcf7e0d","local-member-attributes":"{Name:kubernetes-upgrade-959423 ClientURLs:[https://192.168.39.27:2379]}","request-path":"/0/members/18e2a40e9bcf7e0d/attributes","cluster-id":"7b271957de89ff6f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:51:39.983195Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:51:39.984379Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:39.985258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.27:2379"}
	{"level":"info","ts":"2024-09-06T19:51:39.985316Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:51:39.986330Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:39.987092Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:51:39.991624Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:51:39.991689Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [da460ec11253b86af33172d47f88d4654f297cf774d8c679a0ada4e4411efdfe] <==
	{"level":"info","ts":"2024-09-06T19:51:02.985728Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-06T19:51:03.081754Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"7b271957de89ff6f","local-member-id":"18e2a40e9bcf7e0d","commit-index":422}
	{"level":"info","ts":"2024-09-06T19:51:03.082229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e2a40e9bcf7e0d switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-06T19:51:03.109745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e2a40e9bcf7e0d became follower at term 2"}
	{"level":"info","ts":"2024-09-06T19:51:03.109805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 18e2a40e9bcf7e0d [peers: [], term: 2, commit: 422, applied: 0, lastindex: 422, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-06T19:51:03.112838Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-06T19:51:03.120246Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":407}
	{"level":"info","ts":"2024-09-06T19:51:03.127748Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-06T19:51:03.141729Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"18e2a40e9bcf7e0d","timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:51:03.142674Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"18e2a40e9bcf7e0d"}
	{"level":"info","ts":"2024-09-06T19:51:03.142827Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"18e2a40e9bcf7e0d","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-06T19:51:03.143970Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-06T19:51:03.144128Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:51:03.144178Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:51:03.144191Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:51:03.144392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18e2a40e9bcf7e0d switched to configuration voters=(1793175984297442829)"}
	{"level":"info","ts":"2024-09-06T19:51:03.144472Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7b271957de89ff6f","local-member-id":"18e2a40e9bcf7e0d","added-peer-id":"18e2a40e9bcf7e0d","added-peer-peer-urls":["https://192.168.39.27:2380"]}
	{"level":"info","ts":"2024-09-06T19:51:03.144639Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7b271957de89ff6f","local-member-id":"18e2a40e9bcf7e0d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:51:03.144664Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:51:03.145009Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:03.149250Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-06T19:51:03.149668Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"18e2a40e9bcf7e0d","initial-advertise-peer-urls":["https://192.168.39.27:2380"],"listen-peer-urls":["https://192.168.39.27:2380"],"advertise-client-urls":["https://192.168.39.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T19:51:03.149802Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T19:51:03.149945Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.27:2380"}
	{"level":"info","ts":"2024-09-06T19:51:03.150042Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.27:2380"}
	
	
	==> kernel <==
	 19:51:45 up 2 min,  0 users,  load average: 1.09, 0.33, 0.12
	Linux kubernetes-upgrade-959423 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [595dae767a70b52ce1dc0d2b18062690a58c8b76b93dbbffc0c4d10ea525680a] <==
	I0906 19:51:41.433957       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:51:41.434687       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:51:41.434856       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0906 19:51:41.434968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:51:41.438893       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:51:41.438954       1 policy_source.go:224] refreshing policies
	I0906 19:51:41.445342       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:51:41.445429       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:51:41.445593       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:51:41.445780       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:51:41.445812       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:51:41.445837       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:51:41.445859       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:51:41.446817       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:51:41.453144       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0906 19:51:41.468260       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0906 19:51:41.505091       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:51:42.310097       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0906 19:51:42.963942       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0906 19:51:42.985507       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0906 19:51:43.060220       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0906 19:51:43.143224       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:51:43.153265       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0906 19:51:44.262784       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:51:45.127355       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [cfde39dec30436cb7f33f6fb3f8b9f270da79428f1982c86e925e84a8e5028e8] <==
	I0906 19:51:16.918005       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0906 19:51:17.467173       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:17.467301       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0906 19:51:17.468105       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0906 19:51:17.474450       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0906 19:51:17.477632       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0906 19:51:17.477723       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0906 19:51:17.477900       1 instance.go:232] Using reconciler: lease
	W0906 19:51:17.478758       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:18.468169       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:18.468217       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:18.479398       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:20.167819       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:20.273099       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:20.380819       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:22.533884       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:22.730017       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:23.283140       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:26.028293       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:26.292386       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:26.800251       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:32.052705       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:32.098887       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 19:51:32.746086       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0906 19:51:37.479062       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [56defb60ba7ce66bf02b36d6890c130247ffd966f3c42a5c77faaebed9a68728] <==
	
	
	==> kube-controller-manager [7a273bb32ec7a5979b2f636f5d98d82dfa6b30a987e85929e5f8754c64fade17] <==
	I0906 19:51:44.881375       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0906 19:51:44.881383       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0906 19:51:44.881391       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0906 19:51:44.881511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-959423"
	I0906 19:51:44.883884       1 shared_informer.go:320] Caches are synced for TTL
	I0906 19:51:44.883883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="215.780252ms"
	I0906 19:51:44.884184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="73.416µs"
	I0906 19:51:44.891443       1 shared_informer.go:320] Caches are synced for attach detach
	I0906 19:51:44.895894       1 shared_informer.go:320] Caches are synced for daemon sets
	I0906 19:51:44.903818       1 shared_informer.go:320] Caches are synced for persistent volume
	I0906 19:51:44.918925       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0906 19:51:44.919001       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-959423"
	I0906 19:51:44.919527       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0906 19:51:44.919617       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0906 19:51:44.919637       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0906 19:51:44.919673       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0906 19:51:44.920618       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0906 19:51:44.921785       1 shared_informer.go:320] Caches are synced for taint
	I0906 19:51:44.921945       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0906 19:51:44.922016       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-959423"
	I0906 19:51:44.922060       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0906 19:51:44.960308       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0906 19:51:45.355714       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:51:45.355753       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0906 19:51:45.367956       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [e943e13d127832f5e5666b3e1c0b743669f74096981d7bb78b6107e21d6d779d] <==
	
	
	==> kube-proxy [fd2cf131e8ce847ea91db91d74f31bf12d781b0be060cc886b54469c5fdfb7fe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:51:42.271438       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:51:42.288082       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.27"]
	E0906 19:51:42.288257       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:51:42.344679       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:51:42.344763       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:51:42.344807       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:51:42.351207       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:51:42.351477       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:51:42.351504       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:51:42.353151       1 config.go:197] "Starting service config controller"
	I0906 19:51:42.353208       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:51:42.353236       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:51:42.353262       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:51:42.354106       1 config.go:326] "Starting node config controller"
	I0906 19:51:42.354131       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:51:42.453764       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:51:42.453833       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:51:42.454315       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [44ef040d638b01e160cca3f51a534686779a23b8d8a1cfb11ff0f0b4e0a7ca5e] <==
	I0906 19:51:39.735662       1 serving.go:386] Generated self-signed cert in-memory
	W0906 19:51:41.365737       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:51:41.365883       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:51:41.365914       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:51:41.365997       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:51:41.466471       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 19:51:41.466664       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:51:41.469196       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:51:41.469354       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:51:41.470063       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 19:51:41.470170       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:51:41.570700       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d2ef3791c7be9948cf7046555bfdea9d17a990f6de2f5b1fcb7d7185acc934c0] <==
	
	
	==> kubelet <==
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: E0906 19:51:38.486833    4073 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.39.27:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.27:58334->192.168.39.27:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: W0906 19:51:38.486652    4073 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-959423&limit=500&resourceVersion=0": dial tcp 192.168.39.27:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.27:58360->192.168.39.27:8443: read: connection reset by peer
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: E0906 19:51:38.486986    4073 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-959423&limit=500&resourceVersion=0\": dial tcp 192.168.39.27:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.27:58360->192.168.39.27:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: E0906 19:51:38.487680    4073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-959423?timeout=10s\": dial tcp 192.168.39.27:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.27:58290->192.168.39.27:8443: read: connection reset by peer" interval="400ms"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:38.668093    4073 scope.go:117] "RemoveContainer" containerID="da460ec11253b86af33172d47f88d4654f297cf774d8c679a0ada4e4411efdfe"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:38.670146    4073 scope.go:117] "RemoveContainer" containerID="56defb60ba7ce66bf02b36d6890c130247ffd966f3c42a5c77faaebed9a68728"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:38.670989    4073 scope.go:117] "RemoveContainer" containerID="d2ef3791c7be9948cf7046555bfdea9d17a990f6de2f5b1fcb7d7185acc934c0"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:38.690969    4073 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-959423"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: E0906 19:51:38.696057    4073 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.27:8443: connect: connection refused" node="kubernetes-upgrade-959423"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: E0906 19:51:38.820499    4073 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652298819739038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: E0906 19:51:38.820540    4073 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652298819739038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: E0906 19:51:38.889775    4073 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-959423?timeout=10s\": dial tcp 192.168.39.27:8443: connect: connection refused" interval="800ms"
	Sep 06 19:51:38 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:38.951121    4073 scope.go:117] "RemoveContainer" containerID="cfde39dec30436cb7f33f6fb3f8b9f270da79428f1982c86e925e84a8e5028e8"
	Sep 06 19:51:40 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:40.298268    4073 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-959423"
	Sep 06 19:51:41 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:41.546129    4073 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-959423"
	Sep 06 19:51:41 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:41.546629    4073 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-959423"
	Sep 06 19:51:41 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:41.546783    4073 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 06 19:51:41 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:41.548467    4073 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 06 19:51:41 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:41.704114    4073 apiserver.go:52] "Watching apiserver"
	Sep 06 19:51:41 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:41.723190    4073 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 06 19:51:41 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:41.771306    4073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aadd6e4-96ac-44ab-9291-74395e459e0d-xtables-lock\") pod \"kube-proxy-rvzqv\" (UID: \"0aadd6e4-96ac-44ab-9291-74395e459e0d\") " pod="kube-system/kube-proxy-rvzqv"
	Sep 06 19:51:41 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:41.771394    4073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aadd6e4-96ac-44ab-9291-74395e459e0d-lib-modules\") pod \"kube-proxy-rvzqv\" (UID: \"0aadd6e4-96ac-44ab-9291-74395e459e0d\") " pod="kube-system/kube-proxy-rvzqv"
	Sep 06 19:51:41 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:41.771417    4073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7595906a-7f0a-472b-9b25-9ed8358a5e19-tmp\") pod \"storage-provisioner\" (UID: \"7595906a-7f0a-472b-9b25-9ed8358a5e19\") " pod="kube-system/storage-provisioner"
	Sep 06 19:51:42 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:42.008836    4073 scope.go:117] "RemoveContainer" containerID="e943e13d127832f5e5666b3e1c0b743669f74096981d7bb78b6107e21d6d779d"
	Sep 06 19:51:42 kubernetes-upgrade-959423 kubelet[4073]: I0906 19:51:42.009155    4073 scope.go:117] "RemoveContainer" containerID="fdc78cc5a655321939aec201db522d171923aba0903cb32d86a06ff98158adc8"
	
	
	==> storage-provisioner [5e7b273b6ce7a05f2c36a2f9ad86f6dfceec2ae118f6b597549570cc0fbad606] <==
	I0906 19:51:42.076497       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 19:51:42.113295       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 19:51:42.113372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [fdc78cc5a655321939aec201db522d171923aba0903cb32d86a06ff98158adc8] <==
	I0906 19:50:52.620985       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 19:50:52.641925       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 19:50:52.642242       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 19:50:52.654469       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 19:50:52.654807       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-959423_6ba65bd1-a809-4111-9be9-6a7b217ca14f!
	I0906 19:50:52.655104       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"38f69bbb-3a79-4327-9d01-9a7db9b7910a", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-959423_6ba65bd1-a809-4111-9be9-6a7b217ca14f became leader
	I0906 19:50:52.756106       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-959423_6ba65bd1-a809-4111-9be9-6a7b217ca14f!
	

-- /stdout --
** stderr ** 
	E0906 19:51:44.802149   58007 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19576-6021/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-959423 -n kubernetes-upgrade-959423
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-959423 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-959423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-959423
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-959423: (1.116552094s)
--- FAIL: TestKubernetesUpgrade (435.01s)
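The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.Scanner giving up on a line of lastStart.txt that exceeds its default per-token limit (bufio.MaxScanTokenSize, 64 KiB). The sketch below is illustrative only and is not minikube's actual logs.go code; it shows how that error arises when reading such a file line by line and how Scanner.Buffer raises the cap (the 10 MiB limit is an assumed value).

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	// readAllLines prints a log file line by line. With the default Scanner
	// settings, any single line longer than bufio.MaxScanTokenSize (64 KiB)
	// stops the scan and Err() reports "bufio.Scanner: token too long".
	func readAllLines(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		s := bufio.NewScanner(f)
		// Raise the per-line cap so very long entries do not abort the read.
		// 10 MiB here is an assumed, illustrative limit.
		s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for s.Scan() {
			fmt.Println(s.Text())
		}
		return s.Err()
	}

	func main() {
		if err := readAllLines("lastStart.txt"); err != nil {
			log.Fatal(err)
		}
	}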

x
+
TestPause/serial/SecondStartNoReconfiguration (55.56s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-306799 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0906 19:51:44.178553   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-306799 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.516593231s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-306799] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-306799" primary control-plane node in "pause-306799" cluster
	* Updating the running kvm2 "pause-306799" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-306799" cluster and "default" namespace by default

-- /stdout --
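For the failure above: pause_test.go:100 checks that a second start of an already-running profile prints "The running cluster does not require reconfiguration", and the captured stdout instead shows the VM being updated and Kubernetes re-prepared, so the substring is absent. A rough sketch of that kind of assertion follows; it is illustrative only and not the actual pause_test.go source (the function name and error wording are assumptions).

	package pause_sketch

	import (
		"os/exec"
		"strings"
		"testing"
	)

	// verifySecondStart is an illustrative stand-in for the check implied by
	// the failure message above, not minikube's real test code. It re-runs
	// `minikube start` on an existing profile and expects the
	// "no reconfiguration" marker in the combined output.
	func verifySecondStart(t *testing.T, profile string) {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
			"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		if err != nil {
			t.Fatalf("second start of %q failed: %v\n%s", profile, err, out)
		}
		if !strings.Contains(string(out), "The running cluster does not require reconfiguration") {
			t.Errorf("expected the second start output to include the no-reconfiguration message, got:\n%s", out)
		}
	}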
** stderr ** 
	I0906 19:50:57.650137   57714 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:50:57.650254   57714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:50:57.650265   57714 out.go:358] Setting ErrFile to fd 2...
	I0906 19:50:57.650271   57714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:50:57.650463   57714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:50:57.650993   57714 out.go:352] Setting JSON to false
	I0906 19:50:57.651936   57714 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5607,"bootTime":1725646651,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:50:57.651997   57714 start.go:139] virtualization: kvm guest
	I0906 19:50:57.654217   57714 out.go:177] * [pause-306799] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:50:57.655384   57714 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:50:57.655429   57714 notify.go:220] Checking for updates...
	I0906 19:50:57.657955   57714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:50:57.659318   57714 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:50:57.660548   57714 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:50:57.661694   57714 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:50:57.662774   57714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:50:57.664218   57714 config.go:182] Loaded profile config "pause-306799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:50:57.664679   57714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:50:57.664727   57714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:50:57.682235   57714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37073
	I0906 19:50:57.682714   57714 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:50:57.683301   57714 main.go:141] libmachine: Using API Version  1
	I0906 19:50:57.683346   57714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:50:57.683769   57714 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:50:57.683971   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:50:57.684280   57714 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:50:57.684734   57714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:50:57.684782   57714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:50:57.699914   57714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37759
	I0906 19:50:57.700369   57714 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:50:57.700925   57714 main.go:141] libmachine: Using API Version  1
	I0906 19:50:57.700954   57714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:50:57.701284   57714 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:50:57.701523   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:50:57.758615   57714 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 19:50:57.791637   57714 start.go:297] selected driver: kvm2
	I0906 19:50:57.791664   57714 start.go:901] validating driver "kvm2" against &{Name:pause-306799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-306799 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false p
ortainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:50:57.791831   57714 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:50:57.792194   57714 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:50:57.792288   57714 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:50:57.808705   57714 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:50:57.809475   57714 cni.go:84] Creating CNI manager for ""
	I0906 19:50:57.809492   57714 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:50:57.809566   57714 start.go:340] cluster config:
	{Name:pause-306799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-306799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false stor
age-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:50:57.809713   57714 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:50:57.856274   57714 out.go:177] * Starting "pause-306799" primary control-plane node in "pause-306799" cluster
	I0906 19:50:57.870082   57714 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:50:57.870173   57714 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:50:57.870189   57714 cache.go:56] Caching tarball of preloaded images
	I0906 19:50:57.870303   57714 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:50:57.870334   57714 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 19:50:57.870487   57714 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/config.json ...
	I0906 19:50:57.870691   57714 start.go:360] acquireMachinesLock for pause-306799: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:51:01.217769   57714 start.go:364] duration metric: took 3.347048125s to acquireMachinesLock for "pause-306799"
	I0906 19:51:01.217822   57714 start.go:96] Skipping create...Using existing machine configuration
	I0906 19:51:01.217846   57714 fix.go:54] fixHost starting: 
	I0906 19:51:01.218271   57714 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:51:01.218332   57714 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:51:01.238626   57714 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45061
	I0906 19:51:01.239102   57714 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:51:01.239605   57714 main.go:141] libmachine: Using API Version  1
	I0906 19:51:01.239628   57714 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:51:01.239942   57714 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:51:01.240133   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:01.240289   57714 main.go:141] libmachine: (pause-306799) Calling .GetState
	I0906 19:51:01.241885   57714 fix.go:112] recreateIfNeeded on pause-306799: state=Running err=<nil>
	W0906 19:51:01.241903   57714 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 19:51:01.243569   57714 out.go:177] * Updating the running kvm2 "pause-306799" VM ...
	I0906 19:51:01.244869   57714 machine.go:93] provisionDockerMachine start ...
	I0906 19:51:01.244892   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:01.245074   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.247985   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.248372   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.248395   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.248561   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:01.248713   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.248888   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.249040   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:01.249210   57714 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:01.249443   57714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0906 19:51:01.249461   57714 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 19:51:01.350472   57714 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-306799
	
	I0906 19:51:01.350522   57714 main.go:141] libmachine: (pause-306799) Calling .GetMachineName
	I0906 19:51:01.350911   57714 buildroot.go:166] provisioning hostname "pause-306799"
	I0906 19:51:01.350941   57714 main.go:141] libmachine: (pause-306799) Calling .GetMachineName
	I0906 19:51:01.351175   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.354073   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.354570   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.354599   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.354793   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:01.354996   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.355154   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.355280   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:01.355438   57714 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:01.355658   57714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0906 19:51:01.355676   57714 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-306799 && echo "pause-306799" | sudo tee /etc/hostname
	I0906 19:51:01.476743   57714 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-306799
	
	I0906 19:51:01.476766   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.479455   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.479739   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.479766   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.479930   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:01.480151   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.480312   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.480467   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:01.480635   57714 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:01.480824   57714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0906 19:51:01.480840   57714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-306799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-306799/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-306799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:51:01.583236   57714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:51:01.583282   57714 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:51:01.583306   57714 buildroot.go:174] setting up certificates
	I0906 19:51:01.583317   57714 provision.go:84] configureAuth start
	I0906 19:51:01.583340   57714 main.go:141] libmachine: (pause-306799) Calling .GetMachineName
	I0906 19:51:01.583621   57714 main.go:141] libmachine: (pause-306799) Calling .GetIP
	I0906 19:51:01.586914   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.587209   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.587282   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.587587   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.590362   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.590865   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.590888   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.591104   57714 provision.go:143] copyHostCerts
	I0906 19:51:01.591169   57714 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:51:01.591190   57714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:51:01.591266   57714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:51:01.591408   57714 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:51:01.591422   57714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:51:01.591471   57714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:51:01.591577   57714 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:51:01.591590   57714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:51:01.591623   57714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:51:01.591735   57714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.pause-306799 san=[127.0.0.1 192.168.50.125 localhost minikube pause-306799]
	I0906 19:51:01.687734   57714 provision.go:177] copyRemoteCerts
	I0906 19:51:01.687804   57714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:51:01.687833   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.690731   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.691126   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.691151   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.691423   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:01.691649   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.691842   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:01.691990   57714 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/pause-306799/id_rsa Username:docker}
	I0906 19:51:01.780081   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:51:01.813165   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0906 19:51:01.843688   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 19:51:01.872315   57714 provision.go:87] duration metric: took 288.978668ms to configureAuth
	I0906 19:51:01.872346   57714 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:51:01.872615   57714 config.go:182] Loaded profile config "pause-306799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:51:01.872713   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:01.876306   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.876758   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:01.876815   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:01.877002   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:01.877202   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.877403   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:01.877548   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:01.877766   57714 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:01.877991   57714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0906 19:51:01.878017   57714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:51:07.417181   57714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:51:07.417212   57714 machine.go:96] duration metric: took 6.172326991s to provisionDockerMachine
	I0906 19:51:07.417225   57714 start.go:293] postStartSetup for "pause-306799" (driver="kvm2")
	I0906 19:51:07.417238   57714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:51:07.417267   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:07.417590   57714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:51:07.417621   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:07.420555   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.420936   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:07.420965   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.421127   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:07.421302   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:07.421446   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:07.421607   57714 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/pause-306799/id_rsa Username:docker}
	I0906 19:51:07.499466   57714 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:51:07.503831   57714 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:51:07.503856   57714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:51:07.503926   57714 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:51:07.504019   57714 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:51:07.504144   57714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:51:07.513654   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:51:07.539136   57714 start.go:296] duration metric: took 121.896794ms for postStartSetup
	I0906 19:51:07.539183   57714 fix.go:56] duration metric: took 6.321349013s for fixHost
	I0906 19:51:07.539208   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:07.541515   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.541795   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:07.541827   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.541942   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:07.542124   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:07.542288   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:07.542426   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:07.542617   57714 main.go:141] libmachine: Using SSH client type: native
	I0906 19:51:07.542766   57714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.125 22 <nil> <nil>}
	I0906 19:51:07.542776   57714 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:51:07.641900   57714 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725652267.632838814
	
	I0906 19:51:07.641923   57714 fix.go:216] guest clock: 1725652267.632838814
	I0906 19:51:07.641930   57714 fix.go:229] Guest: 2024-09-06 19:51:07.632838814 +0000 UTC Remote: 2024-09-06 19:51:07.539188931 +0000 UTC m=+9.926937901 (delta=93.649883ms)
	I0906 19:51:07.641951   57714 fix.go:200] guest clock delta is within tolerance: 93.649883ms
	I0906 19:51:07.641957   57714 start.go:83] releasing machines lock for "pause-306799", held for 6.424160144s
	I0906 19:51:07.641980   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:07.642217   57714 main.go:141] libmachine: (pause-306799) Calling .GetIP
	I0906 19:51:07.644942   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.645310   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:07.645350   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.645500   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:07.646042   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:07.646222   57714 main.go:141] libmachine: (pause-306799) Calling .DriverName
	I0906 19:51:07.646303   57714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:51:07.646339   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:07.646445   57714 ssh_runner.go:195] Run: cat /version.json
	I0906 19:51:07.646464   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHHostname
	I0906 19:51:07.649162   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.649347   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.649600   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:07.649625   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.649766   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:07.649890   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:07.649914   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:07.649938   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:07.650067   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHPort
	I0906 19:51:07.650121   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:07.650216   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHKeyPath
	I0906 19:51:07.650297   57714 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/pause-306799/id_rsa Username:docker}
	I0906 19:51:07.650475   57714 main.go:141] libmachine: (pause-306799) Calling .GetSSHUsername
	I0906 19:51:07.650626   57714 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/pause-306799/id_rsa Username:docker}
	I0906 19:51:07.722431   57714 ssh_runner.go:195] Run: systemctl --version
	I0906 19:51:07.745593   57714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:51:07.904565   57714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 19:51:07.912796   57714 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:51:07.912878   57714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:51:07.922631   57714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0906 19:51:07.922656   57714 start.go:495] detecting cgroup driver to use...
	I0906 19:51:07.922725   57714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:51:07.940234   57714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:51:07.955832   57714 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:51:07.955910   57714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:51:07.971533   57714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:51:07.987816   57714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:51:08.135894   57714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:51:08.278766   57714 docker.go:233] disabling docker service ...
	I0906 19:51:08.278855   57714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:51:08.302182   57714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:51:08.319614   57714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:51:08.468084   57714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:51:08.612332   57714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:51:08.629590   57714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:51:08.654792   57714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 19:51:08.654868   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.671184   57714 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:51:08.671253   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.682935   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.698941   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.713568   57714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:51:08.725718   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.737928   57714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.751620   57714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:51:08.763786   57714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:51:08.774548   57714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:51:08.785199   57714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:51:08.934698   57714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 19:51:09.146920   57714 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:51:09.146992   57714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:51:09.153221   57714 start.go:563] Will wait 60s for crictl version
	I0906 19:51:09.153289   57714 ssh_runner.go:195] Run: which crictl
	I0906 19:51:09.157255   57714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:51:09.194447   57714 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 19:51:09.194541   57714 ssh_runner.go:195] Run: crio --version
	I0906 19:51:09.226208   57714 ssh_runner.go:195] Run: crio --version
	I0906 19:51:09.258357   57714 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 19:51:09.259500   57714 main.go:141] libmachine: (pause-306799) Calling .GetIP
	I0906 19:51:09.261915   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:09.262218   57714 main.go:141] libmachine: (pause-306799) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:3c:a3", ip: ""} in network mk-pause-306799: {Iface:virbr2 ExpiryTime:2024-09-06 20:50:16 +0000 UTC Type:0 Mac:52:54:00:e6:3c:a3 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:pause-306799 Clientid:01:52:54:00:e6:3c:a3}
	I0906 19:51:09.262244   57714 main.go:141] libmachine: (pause-306799) DBG | domain pause-306799 has defined IP address 192.168.50.125 and MAC address 52:54:00:e6:3c:a3 in network mk-pause-306799
	I0906 19:51:09.262515   57714 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0906 19:51:09.266957   57714 kubeadm.go:883] updating cluster {Name:pause-306799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-306799 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:51:09.267123   57714 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:51:09.267166   57714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:51:09.312254   57714 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:51:09.312279   57714 crio.go:433] Images already preloaded, skipping extraction
	I0906 19:51:09.312331   57714 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:51:09.347222   57714 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 19:51:09.347243   57714 cache_images.go:84] Images are preloaded, skipping loading
	I0906 19:51:09.347251   57714 kubeadm.go:934] updating node { 192.168.50.125 8443 v1.31.0 crio true true} ...
	I0906 19:51:09.347378   57714 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-306799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-306799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:51:09.347462   57714 ssh_runner.go:195] Run: crio config
	I0906 19:51:09.401605   57714 cni.go:84] Creating CNI manager for ""
	I0906 19:51:09.401636   57714 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:51:09.401659   57714 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:51:09.401686   57714 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-306799 NodeName:pause-306799 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 19:51:09.401894   57714 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-306799"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:51:09.401980   57714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 19:51:09.412737   57714 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:51:09.412810   57714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 19:51:09.423404   57714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 19:51:09.440696   57714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:51:09.457829   57714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0906 19:51:09.475345   57714 ssh_runner.go:195] Run: grep 192.168.50.125	control-plane.minikube.internal$ /etc/hosts
	I0906 19:51:09.480454   57714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:51:09.616433   57714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:51:09.635496   57714 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799 for IP: 192.168.50.125
	I0906 19:51:09.635529   57714 certs.go:194] generating shared ca certs ...
	I0906 19:51:09.635543   57714 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:51:09.635713   57714 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:51:09.635775   57714 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:51:09.635789   57714 certs.go:256] generating profile certs ...
	I0906 19:51:09.635910   57714 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/client.key
	I0906 19:51:09.636012   57714 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/apiserver.key.246d0d9a
	I0906 19:51:09.636067   57714 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/proxy-client.key
	I0906 19:51:09.636231   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:51:09.636268   57714 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:51:09.636282   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:51:09.636317   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:51:09.636350   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:51:09.636386   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:51:09.636441   57714 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:51:09.637168   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:51:09.673319   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:51:09.706687   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:51:09.736192   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:51:09.762158   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 19:51:09.872346   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 19:51:10.076308   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:51:10.373308   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/pause-306799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 19:51:10.443189   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:51:10.572596   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:51:10.661517   57714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:51:10.732243   57714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:51:10.780383   57714 ssh_runner.go:195] Run: openssl version
	I0906 19:51:10.798643   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:51:10.819673   57714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:51:10.868596   57714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:51:10.868666   57714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:51:10.882897   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:51:10.917348   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:51:10.939430   57714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:51:10.946755   57714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:51:10.946823   57714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:51:10.960405   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 19:51:10.976140   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:51:10.989311   57714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:51:10.994767   57714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:51:10.994835   57714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:51:11.003892   57714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
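The three ls/openssl/ln sequences above implement the standard OpenSSL CA-directory layout: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash with a .0 suffix (b5213941.0 for minikubeCA.pem in this run). A minimal sketch of the same convention, assuming only that openssl and the certificate path from the log are present:

    # compute the subject hash openssl uses for CA directory lookups
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # creates e.g. /etc/ssl/certs/b5213941.0 -> /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"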
	I0906 19:51:11.017644   57714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:51:11.023799   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 19:51:11.032735   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 19:51:11.043673   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 19:51:11.053791   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 19:51:11.067159   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 19:51:11.075891   57714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
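The six -checkend 86400 invocations above are expiry probes: openssl x509 -checkend N exits 0 when the certificate is still valid N seconds from now and exits 1 otherwise, which is presumably how minikube decides whether a certificate can be reused. A sketch of reading that exit code, using one of the paths from this log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate is valid for at least another 24h"
    else
        echo "certificate expires within 24h (or is already expired)"
    fi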
	I0906 19:51:11.088507   57714 kubeadm.go:392] StartCluster: {Name:pause-306799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-306799 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false reg
istry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:51:11.088630   57714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:51:11.088701   57714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:51:11.193474   57714 cri.go:89] found id: "2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe"
	I0906 19:51:11.193500   57714 cri.go:89] found id: "38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40"
	I0906 19:51:11.193504   57714 cri.go:89] found id: "b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561"
	I0906 19:51:11.193506   57714 cri.go:89] found id: "2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec"
	I0906 19:51:11.193509   57714 cri.go:89] found id: "2b9420a0e24963243b49fbda11aa9d8db70d95e55008ace0561a42167942aa14"
	I0906 19:51:11.193513   57714 cri.go:89] found id: "82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2"
	I0906 19:51:11.193515   57714 cri.go:89] found id: "91d9c7eee276948bf6e7a2cc7d2d970c7e2b2fd64d59122dc890dc4ec18a873e"
	I0906 19:51:11.193517   57714 cri.go:89] found id: "cd5f9fd41f5323bb23d32dbd7a4868e2a9e1073672e71a8de72ce6bd2720f1b4"
	I0906 19:51:11.193520   57714 cri.go:89] found id: "3f8d268b6a59eb50dfa1964841fd8387806e78a3ff267d28abafe1872b4f9c3d"
	I0906 19:51:11.193526   57714 cri.go:89] found id: "6193e6f4141c98c97a9fc5e4b80236700c909fdecff57ecaa2f1a9523fb25d50"
	I0906 19:51:11.193528   57714 cri.go:89] found id: ""
	I0906 19:51:11.193571   57714 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
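The container IDs listed just before the stderr marker came from the crictl invocation shown in the log. The same listing can be reproduced on the node by hand; the label filter is exactly as logged, while the follow-up inspect of one of the returned IDs is only an illustrative next step, not something the test ran:

    # all kube-system containers known to CRI-O, IDs only (as in the log above)
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # hypothetical follow-up: inspect one of the returned IDs
    sudo crictl inspect 2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe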
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-306799 -n pause-306799
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-306799 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-306799 logs -n 25: (1.389646873s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p NoKubernetes-944227                | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:47 UTC |
	| start   | -p NoKubernetes-944227                | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:48 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-952957             | running-upgrade-952957    | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:47 UTC |
	| start   | -p force-systemd-flag-689823          | force-systemd-flag-689823 | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:48 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-944227 sudo           | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-944227                | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	| start   | -p cert-expiration-097103             | cert-expiration-097103    | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:49 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-098096 stop           | minikube                  | jenkins | v1.26.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	| start   | -p stopped-upgrade-098096             | stopped-upgrade-098096    | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:49 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-689823 ssh cat     | force-systemd-flag-689823 | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-689823          | force-systemd-flag-689823 | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	| start   | -p cert-options-417185                | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:49 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	| start   | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:50 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-098096             | stopped-upgrade-098096    | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	| start   | -p pause-306799 --memory=2048         | pause-306799              | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:50 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-417185 ssh               | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-417185 -- sudo        | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-417185                | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:50 UTC |
	| start   | -p auto-603826 --memory=3072          | auto-603826               | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC | 06 Sep 24 19:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-306799                       | pause-306799              | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC | 06 Sep 24 19:51 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:51 UTC | 06 Sep 24 19:51 UTC |
	| start   | -p kindnet-603826                     | kindnet-603826            | jenkins | v1.34.0 | 06 Sep 24 19:51 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
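The Audit table records every minikube invocation leading up to the failure. The second start of the profile under test is the pause-306799 row near the bottom; reconstructed from the table's columns it would have been issued roughly as below (binary path and flags are as listed, but the table does not preserve exact flag ordering):

    out/minikube-linux-amd64 start -p pause-306799 --alsologtostderr -v=1 \
        --driver=kvm2 --container-runtime=crio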
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 19:51:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:51:48.012157   58167 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:51:48.012286   58167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:51:48.012296   58167 out.go:358] Setting ErrFile to fd 2...
	I0906 19:51:48.012300   58167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:51:48.012457   58167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:51:48.013040   58167 out.go:352] Setting JSON to false
	I0906 19:51:48.013955   58167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5657,"bootTime":1725646651,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:51:48.014012   58167 start.go:139] virtualization: kvm guest
	I0906 19:51:48.016023   58167 out.go:177] * [kindnet-603826] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:51:48.017290   58167 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:51:48.017319   58167 notify.go:220] Checking for updates...
	I0906 19:51:48.019869   58167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:51:48.021034   58167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:51:48.022218   58167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:51:48.023582   58167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:51:48.024831   58167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:51:48.026515   58167 config.go:182] Loaded profile config "auto-603826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:51:48.026641   58167 config.go:182] Loaded profile config "cert-expiration-097103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:51:48.026764   58167 config.go:182] Loaded profile config "pause-306799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:51:48.026855   58167 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:51:48.063861   58167 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 19:51:48.064915   58167 start.go:297] selected driver: kvm2
	I0906 19:51:48.064929   58167 start.go:901] validating driver "kvm2" against <nil>
	I0906 19:51:48.064939   58167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:51:48.065648   58167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:51:48.065756   58167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:51:48.081648   58167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:51:48.081690   58167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 19:51:48.081892   58167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:51:48.081953   58167 cni.go:84] Creating CNI manager for "kindnet"
	I0906 19:51:48.081965   58167 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 19:51:48.082037   58167 start.go:340] cluster config:
	{Name:kindnet-603826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-603826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0906 19:51:48.082157   58167 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:51:48.083916   58167 out.go:177] * Starting "kindnet-603826" primary control-plane node in "kindnet-603826" cluster
	I0906 19:51:48.085072   58167 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:51:48.085107   58167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:51:48.085125   58167 cache.go:56] Caching tarball of preloaded images
	I0906 19:51:48.085219   58167 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:51:48.085233   58167 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 19:51:48.085347   58167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/config.json ...
	I0906 19:51:48.085380   58167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/config.json: {Name:mkbc31c2f5d89fcc809ca76c473197d79a6361b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:51:48.085536   58167 start.go:360] acquireMachinesLock for kindnet-603826: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:51:48.085569   58167 start.go:364] duration metric: took 17.355µs to acquireMachinesLock for "kindnet-603826"
	I0906 19:51:48.085592   58167 start.go:93] Provisioning new machine with config: &{Name:kindnet-603826 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-603826 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 19:51:48.085668   58167 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 19:51:47.862224   57714 pod_ready.go:93] pod "kube-proxy-gkn5p" in "kube-system" namespace has status "Ready":"True"
	I0906 19:51:47.862251   57714 pod_ready.go:82] duration metric: took 400.656748ms for pod "kube-proxy-gkn5p" in "kube-system" namespace to be "Ready" ...
	I0906 19:51:47.862264   57714 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-306799" in "kube-system" namespace to be "Ready" ...
	I0906 19:51:48.261715   57714 pod_ready.go:93] pod "kube-scheduler-pause-306799" in "kube-system" namespace has status "Ready":"True"
	I0906 19:51:48.261738   57714 pod_ready.go:82] duration metric: took 399.466165ms for pod "kube-scheduler-pause-306799" in "kube-system" namespace to be "Ready" ...
	I0906 19:51:48.261746   57714 pod_ready.go:39] duration metric: took 2.577450358s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 19:51:48.261759   57714 api_server.go:52] waiting for apiserver process to appear ...
	I0906 19:51:48.261809   57714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:51:48.277065   57714 api_server.go:72] duration metric: took 2.797092849s to wait for apiserver process to appear ...
	I0906 19:51:48.277095   57714 api_server.go:88] waiting for apiserver healthz status ...
	I0906 19:51:48.277112   57714 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8443/healthz ...
	I0906 19:51:48.284123   57714 api_server.go:279] https://192.168.50.125:8443/healthz returned 200:
	ok
	I0906 19:51:48.285370   57714 api_server.go:141] control plane version: v1.31.0
	I0906 19:51:48.285398   57714 api_server.go:131] duration metric: took 8.295097ms to wait for apiserver health ...
	I0906 19:51:48.285407   57714 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 19:51:48.466273   57714 system_pods.go:59] 6 kube-system pods found
	I0906 19:51:48.466302   57714 system_pods.go:61] "coredns-6f6b679f8f-qb82l" [410a1028-fc4f-42ae-86d9-6c405a1468da] Running
	I0906 19:51:48.466307   57714 system_pods.go:61] "etcd-pause-306799" [4df0de72-66e2-4238-bcd0-b7ccb6eee73a] Running
	I0906 19:51:48.466311   57714 system_pods.go:61] "kube-apiserver-pause-306799" [fee6ee9c-6596-41c0-ac9c-99d56b9c4beb] Running
	I0906 19:51:48.466314   57714 system_pods.go:61] "kube-controller-manager-pause-306799" [6b7ef8c0-cd93-402d-95c1-4fcc296c52a6] Running
	I0906 19:51:48.466317   57714 system_pods.go:61] "kube-proxy-gkn5p" [14826bd0-43b5-4952-90f9-8f59ffc98e91] Running
	I0906 19:51:48.466320   57714 system_pods.go:61] "kube-scheduler-pause-306799" [59c40e8b-3cae-4a16-a8b1-eaf7d667e29a] Running
	I0906 19:51:48.466326   57714 system_pods.go:74] duration metric: took 180.912639ms to wait for pod list to return data ...
	I0906 19:51:48.466332   57714 default_sa.go:34] waiting for default service account to be created ...
	I0906 19:51:48.662528   57714 default_sa.go:45] found service account: "default"
	I0906 19:51:48.662561   57714 default_sa.go:55] duration metric: took 196.222191ms for default service account to be created ...
	I0906 19:51:48.662579   57714 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 19:51:48.863838   57714 system_pods.go:86] 6 kube-system pods found
	I0906 19:51:48.863867   57714 system_pods.go:89] "coredns-6f6b679f8f-qb82l" [410a1028-fc4f-42ae-86d9-6c405a1468da] Running
	I0906 19:51:48.863873   57714 system_pods.go:89] "etcd-pause-306799" [4df0de72-66e2-4238-bcd0-b7ccb6eee73a] Running
	I0906 19:51:48.863876   57714 system_pods.go:89] "kube-apiserver-pause-306799" [fee6ee9c-6596-41c0-ac9c-99d56b9c4beb] Running
	I0906 19:51:48.863880   57714 system_pods.go:89] "kube-controller-manager-pause-306799" [6b7ef8c0-cd93-402d-95c1-4fcc296c52a6] Running
	I0906 19:51:48.863884   57714 system_pods.go:89] "kube-proxy-gkn5p" [14826bd0-43b5-4952-90f9-8f59ffc98e91] Running
	I0906 19:51:48.863889   57714 system_pods.go:89] "kube-scheduler-pause-306799" [59c40e8b-3cae-4a16-a8b1-eaf7d667e29a] Running
	I0906 19:51:48.863898   57714 system_pods.go:126] duration metric: took 201.311721ms to wait for k8s-apps to be running ...
	I0906 19:51:48.863906   57714 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 19:51:48.863955   57714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:51:48.883163   57714 system_svc.go:56] duration metric: took 19.237095ms WaitForService to wait for kubelet
	I0906 19:51:48.883198   57714 kubeadm.go:582] duration metric: took 3.403229464s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:51:48.883221   57714 node_conditions.go:102] verifying NodePressure condition ...
	I0906 19:51:49.062389   57714 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 19:51:49.062412   57714 node_conditions.go:123] node cpu capacity is 2
	I0906 19:51:49.062422   57714 node_conditions.go:105] duration metric: took 179.196825ms to run NodePressure ...
	I0906 19:51:49.062433   57714 start.go:241] waiting for startup goroutines ...
	I0906 19:51:49.062440   57714 start.go:246] waiting for cluster config update ...
	I0906 19:51:49.062446   57714 start.go:255] writing updated cluster config ...
	I0906 19:51:49.062717   57714 ssh_runner.go:195] Run: rm -f paused
	I0906 19:51:49.110865   57714 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 19:51:49.112907   57714 out.go:177] * Done! kubectl is now configured to use "pause-306799" cluster and "default" namespace by default
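Before the "Done!" line the log walks through the readiness gates in order: per-pod Ready conditions, an apiserver process check via pgrep, and finally an HTTPS probe of /healthz that returned 200 with body "ok". That last probe can be repeated from inside the node (for example via minikube ssh); the curl flags here are an assumption, while the endpoint and CA path are taken from this log:

    # from inside the VM, e.g. via: out/minikube-linux-amd64 -p pause-306799 ssh
    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.50.125:8443/healthz
    # expected body on success: ok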
	
	
	==> CRI-O <==
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.827657615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=379055d2-eedc-4131-886b-83c5fa49c536 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.829923157Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a2f93bc-204b-44fa-92f7-ec039d0708c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.830425596Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652309830401442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a2f93bc-204b-44fa-92f7-ec039d0708c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.831514783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9fdf76a-2610-4e4b-a5a2-631aedaa423c name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.831584771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9fdf76a-2610-4e4b-a5a2-631aedaa423c name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.831851283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6755a00286339490ee1fb393ec133b6ac4e8be4558805c48642998a567456010,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652287924368926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1585f47ead7164ea27c9811160c49f3018097ac3add5dd0687a29497b3d94a17,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652284160762737,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870a6fb
47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7027ea848d25a7866d043535f4f659e3818295783aaabe2155764557f5b40e0,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652284147541842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beeb
edfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6aa125f02671c821661b562c06339628d5739c6c5a4d8f29f41dca69b8b53a2,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725652284154566771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52844a81f463ba40fa35ea170cd0a35604659a8725ed98bd83613bc9b14a5111,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652284124565368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c212965774e971df742d4f237b044d14d9fa3a1de859d3f9cf46ecf37335aeb4,PodSandboxId:5bce3ebbc39ecbbea52ed548e9e079b87b36cd52014951c6ecd4577c6a0b2232,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725652270354782983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652271140832842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652270345362961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beebedfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652270357872271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube
-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652270227338198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 870a6fb47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652270150567824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2,PodSandboxId:8f66cdb2628d86eed6b84d62935e25501740835f1d4db524dda2d93ea5390523,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652248282743368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9fdf76a-2610-4e4b-a5a2-631aedaa423c name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.876869976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=962ff9f9-b2f5-4047-ae96-8a41472fb868 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.876983639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=962ff9f9-b2f5-4047-ae96-8a41472fb868 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.878949288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8ec3b73-70f5-44c1-8583-81d5f392dea4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.879473945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652309879437136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8ec3b73-70f5-44c1-8583-81d5f392dea4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.879892639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77ca81a7-8e67-4a6f-8f55-a637c6a3437c name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.879961537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77ca81a7-8e67-4a6f-8f55-a637c6a3437c name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.880328698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6755a00286339490ee1fb393ec133b6ac4e8be4558805c48642998a567456010,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652287924368926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1585f47ead7164ea27c9811160c49f3018097ac3add5dd0687a29497b3d94a17,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652284160762737,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870a6fb
47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7027ea848d25a7866d043535f4f659e3818295783aaabe2155764557f5b40e0,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652284147541842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beeb
edfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6aa125f02671c821661b562c06339628d5739c6c5a4d8f29f41dca69b8b53a2,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725652284154566771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52844a81f463ba40fa35ea170cd0a35604659a8725ed98bd83613bc9b14a5111,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652284124565368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c212965774e971df742d4f237b044d14d9fa3a1de859d3f9cf46ecf37335aeb4,PodSandboxId:5bce3ebbc39ecbbea52ed548e9e079b87b36cd52014951c6ecd4577c6a0b2232,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725652270354782983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652271140832842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652270345362961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beebedfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652270357872271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube
-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652270227338198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 870a6fb47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652270150567824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2,PodSandboxId:8f66cdb2628d86eed6b84d62935e25501740835f1d4db524dda2d93ea5390523,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652248282743368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77ca81a7-8e67-4a6f-8f55-a637c6a3437c name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.920891264Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d0d6770c-a340-4ddf-ad99-7455cb6c43f9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.921114283Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-qb82l,Uid:410a1028-fc4f-42ae-86d9-6c405a1468da,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725652270080316895,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T19:50:48.106896455Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-306799,Uid:dbf9034173e478e72bf32beebedfc2ae,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1725652269899124940,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beebedfc2ae,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dbf9034173e478e72bf32beebedfc2ae,kubernetes.io/config.seen: 2024-09-06T19:50:42.547584474Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-306799,Uid:7325d254b7ce37e89638d8242cad943f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725652269898082276,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,tier: c
ontrol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.125:8443,kubernetes.io/config.hash: 7325d254b7ce37e89638d8242cad943f,kubernetes.io/config.seen: 2024-09-06T19:50:42.547581251Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&PodSandboxMetadata{Name:etcd-pause-306799,Uid:870a6fb47feef38d1c3d0a18e2f0c5ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725652269897381628,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870a6fb47feef38d1c3d0a18e2f0c5ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.125:2379,kubernetes.io/config.hash: 870a6fb47feef38d1c3d0a18e2f0c5ec,kubernetes.io/config.seen: 2024-09-06T19:50:42.547576842Z,kuberne
tes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-306799,Uid:5dbae4fd4cbf227454c0750af6cecf90,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725652269865085757,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5dbae4fd4cbf227454c0750af6cecf90,kubernetes.io/config.seen: 2024-09-06T19:50:42.547583087Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5bce3ebbc39ecbbea52ed548e9e079b87b36cd52014951c6ecd4577c6a0b2232,Metadata:&PodSandboxMetadata{Name:kube-proxy-gkn5p,Uid:14826bd0-43b5-4952-90f9-8f59ffc98e91,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1725652269842714565,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T19:50:47.843312963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f66cdb2628d86eed6b84d62935e25501740835f1d4db524dda2d93ea5390523,Metadata:&PodSandboxMetadata{Name:kube-proxy-gkn5p,Uid:14826bd0-43b5-4952-90f9-8f59ffc98e91,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725652248157980085,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-06T19:50:47.843312963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d0d6770c-a340-4ddf-ad99-7455cb6c43f9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.921889267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78e4ef93-8a9c-421b-af15-8ef2a0495e8b name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.921997094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78e4ef93-8a9c-421b-af15-8ef2a0495e8b name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.922556886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6755a00286339490ee1fb393ec133b6ac4e8be4558805c48642998a567456010,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652287924368926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1585f47ead7164ea27c9811160c49f3018097ac3add5dd0687a29497b3d94a17,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652284160762737,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870a6fb
47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7027ea848d25a7866d043535f4f659e3818295783aaabe2155764557f5b40e0,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652284147541842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beeb
edfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6aa125f02671c821661b562c06339628d5739c6c5a4d8f29f41dca69b8b53a2,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725652284154566771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52844a81f463ba40fa35ea170cd0a35604659a8725ed98bd83613bc9b14a5111,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652284124565368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c212965774e971df742d4f237b044d14d9fa3a1de859d3f9cf46ecf37335aeb4,PodSandboxId:5bce3ebbc39ecbbea52ed548e9e079b87b36cd52014951c6ecd4577c6a0b2232,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725652270354782983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652271140832842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652270345362961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beebedfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652270357872271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube
-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652270227338198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 870a6fb47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652270150567824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2,PodSandboxId:8f66cdb2628d86eed6b84d62935e25501740835f1d4db524dda2d93ea5390523,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652248282743368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78e4ef93-8a9c-421b-af15-8ef2a0495e8b name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.930744718Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=932b72b8-f619-46a4-bd6a-9fecf4807b6f name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.930826829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=932b72b8-f619-46a4-bd6a-9fecf4807b6f name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.932074568Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccc84f39-cfcc-4db2-af15-7332f4d2adf3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.932563575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652309932541458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccc84f39-cfcc-4db2-af15-7332f4d2adf3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.933379732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b826e9b0-aa70-4a68-b7c1-47e344ecf2a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.933446461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b826e9b0-aa70-4a68-b7c1-47e344ecf2a5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:49 pause-306799 crio[2086]: time="2024-09-06 19:51:49.933771645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6755a00286339490ee1fb393ec133b6ac4e8be4558805c48642998a567456010,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652287924368926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1585f47ead7164ea27c9811160c49f3018097ac3add5dd0687a29497b3d94a17,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652284160762737,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870a6fb
47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7027ea848d25a7866d043535f4f659e3818295783aaabe2155764557f5b40e0,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652284147541842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beeb
edfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6aa125f02671c821661b562c06339628d5739c6c5a4d8f29f41dca69b8b53a2,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725652284154566771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52844a81f463ba40fa35ea170cd0a35604659a8725ed98bd83613bc9b14a5111,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652284124565368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c212965774e971df742d4f237b044d14d9fa3a1de859d3f9cf46ecf37335aeb4,PodSandboxId:5bce3ebbc39ecbbea52ed548e9e079b87b36cd52014951c6ecd4577c6a0b2232,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725652270354782983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652271140832842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652270345362961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beebedfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652270357872271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube
-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652270227338198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 870a6fb47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652270150567824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2,PodSandboxId:8f66cdb2628d86eed6b84d62935e25501740835f1d4db524dda2d93ea5390523,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652248282743368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b826e9b0-aa70-4a68-b7c1-47e344ecf2a5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6755a00286339       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   22 seconds ago       Running             coredns                   2                   3c7d09984028e       coredns-6f6b679f8f-qb82l
	1585f47ead716       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   25 seconds ago       Running             etcd                      2                   88b45ecc0a6e0       etcd-pause-306799
	a6aa125f02671       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   25 seconds ago       Running             kube-apiserver            2                   3db9a6cbedd05       kube-apiserver-pause-306799
	a7027ea848d25       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   25 seconds ago       Running             kube-scheduler            2                   e6c0e8cfe44a4       kube-scheduler-pause-306799
	52844a81f463b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   25 seconds ago       Running             kube-controller-manager   2                   9210a3470f8d9       kube-controller-manager-pause-306799
	ad259c357baa9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   38 seconds ago       Exited              coredns                   1                   3c7d09984028e       coredns-6f6b679f8f-qb82l
	2312d9d3305a5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   39 seconds ago       Exited              kube-apiserver            1                   3db9a6cbedd05       kube-apiserver-pause-306799
	c212965774e97       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   39 seconds ago       Running             kube-proxy                1                   5bce3ebbc39ec       kube-proxy-gkn5p
	38a62e6a740e6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   39 seconds ago       Exited              kube-scheduler            1                   e6c0e8cfe44a4       kube-scheduler-pause-306799
	b449348d8d9f2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   39 seconds ago       Exited              etcd                      1                   88b45ecc0a6e0       etcd-pause-306799
	2e9bf79eab24a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   39 seconds ago       Exited              kube-controller-manager   1                   9210a3470f8d9       kube-controller-manager-pause-306799
	82d27dc22b28f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   8f66cdb2628d8       kube-proxy-gkn5p
	
	
	==> coredns [6755a00286339490ee1fb393ec133b6ac4e8be4558805c48642998a567456010] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48134 - 35287 "HINFO IN 7858879936472383266.3356898823924046315. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008707979s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b] <==
	
	
	==> describe nodes <==
	Name:               pause-306799
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-306799
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=pause-306799
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T19_50_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:50:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-306799
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:51:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:51:27 +0000   Fri, 06 Sep 2024 19:50:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:51:27 +0000   Fri, 06 Sep 2024 19:50:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:51:27 +0000   Fri, 06 Sep 2024 19:50:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:51:27 +0000   Fri, 06 Sep 2024 19:50:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.125
	  Hostname:    pause-306799
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4a177ab557549b2bcc6bd03a8044a0a
	  System UUID:                d4a177ab-5575-49b2-bcc6-bd03a8044a0a
	  Boot ID:                    405df62b-1d51-4a93-ac57-8d3619d7e9c6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-qb82l                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system                 etcd-pause-306799                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         69s
	  kube-system                 kube-apiserver-pause-306799             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-pause-306799    200m (10%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-gkn5p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-pause-306799             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 61s                kube-proxy       
	  Normal  Starting                 36s                kube-proxy       
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  68s                kubelet          Node pause-306799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s                kubelet          Node pause-306799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s                kubelet          Node pause-306799 status is now: NodeHasSufficientPID
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeReady                67s                kubelet          Node pause-306799 status is now: NodeReady
	  Normal  RegisteredNode           64s                node-controller  Node pause-306799 event: Registered Node pause-306799 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-306799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-306799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-306799 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node pause-306799 event: Registered Node pause-306799 in Controller
	
	
	==> dmesg <==
	[  +8.988377] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.059952] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070952] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.177644] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.140445] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.302799] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.313096] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +0.066264] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.295083] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +1.187280] kauditd_printk_skb: 57 callbacks suppressed
	[  +4.913945] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +0.083637] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.848881] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +0.973208] kauditd_printk_skb: 41 callbacks suppressed
	[Sep 6 19:51] systemd-fstab-generator[2011]: Ignoring "noauto" option for root device
	[  +0.081923] kauditd_printk_skb: 49 callbacks suppressed
	[  +0.069288] systemd-fstab-generator[2023]: Ignoring "noauto" option for root device
	[  +0.185317] systemd-fstab-generator[2037]: Ignoring "noauto" option for root device
	[  +0.150454] systemd-fstab-generator[2049]: Ignoring "noauto" option for root device
	[  +0.310089] systemd-fstab-generator[2078]: Ignoring "noauto" option for root device
	[  +0.686329] systemd-fstab-generator[2199]: Ignoring "noauto" option for root device
	[  +4.263594] kauditd_printk_skb: 196 callbacks suppressed
	[  +9.632039] systemd-fstab-generator[3059]: Ignoring "noauto" option for root device
	[  +4.557138] kauditd_printk_skb: 41 callbacks suppressed
	[ +17.571243] systemd-fstab-generator[3404]: Ignoring "noauto" option for root device
	
	
	==> etcd [1585f47ead7164ea27c9811160c49f3018097ac3add5dd0687a29497b3d94a17] <==
	{"level":"info","ts":"2024-09-06T19:51:24.568886Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-06T19:51:24.569731Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8e41abb37b207023","initial-advertise-peer-urls":["https://192.168.50.125:2380"],"listen-peer-urls":["https://192.168.50.125:2380"],"advertise-client-urls":["https://192.168.50.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T19:51:24.568913Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-09-06T19:51:24.569314Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:51:24.569483Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40e9c4986db8cbc5","local-member-id":"8e41abb37b207023","added-peer-id":"8e41abb37b207023","added-peer-peer-urls":["https://192.168.50.125:2380"]}
	{"level":"info","ts":"2024-09-06T19:51:24.570944Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T19:51:24.572397Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-09-06T19:51:24.572661Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40e9c4986db8cbc5","local-member-id":"8e41abb37b207023","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:51:24.572742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:51:25.833552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:25.833666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:25.833721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgPreVoteResp from 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:25.833759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became candidate at term 4"}
	{"level":"info","ts":"2024-09-06T19:51:25.833783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgVoteResp from 8e41abb37b207023 at term 4"}
	{"level":"info","ts":"2024-09-06T19:51:25.833811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became leader at term 4"}
	{"level":"info","ts":"2024-09-06T19:51:25.833848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e41abb37b207023 elected leader 8e41abb37b207023 at term 4"}
	{"level":"info","ts":"2024-09-06T19:51:25.839674Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8e41abb37b207023","local-member-attributes":"{Name:pause-306799 ClientURLs:[https://192.168.50.125:2379]}","request-path":"/0/members/8e41abb37b207023/attributes","cluster-id":"40e9c4986db8cbc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:51:25.839775Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:51:25.840293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:51:25.841122Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:25.842375Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:51:25.845696Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:25.846412Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.125:2379"}
	{"level":"info","ts":"2024-09-06T19:51:25.846511Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:51:25.846540Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561] <==
	{"level":"info","ts":"2024-09-06T19:51:11.960241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:51:11.960266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgPreVoteResp from 8e41abb37b207023 at term 2"}
	{"level":"info","ts":"2024-09-06T19:51:11.960288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:11.960294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgVoteResp from 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:11.960302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became leader at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:11.960309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e41abb37b207023 elected leader 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:11.963509Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8e41abb37b207023","local-member-attributes":"{Name:pause-306799 ClientURLs:[https://192.168.50.125:2379]}","request-path":"/0/members/8e41abb37b207023/attributes","cluster-id":"40e9c4986db8cbc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:51:11.963660Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:51:11.966147Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:11.971021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.125:2379"}
	{"level":"info","ts":"2024-09-06T19:51:11.989229Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:51:11.992215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:51:11.992298Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:51:11.992851Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:11.996975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:51:21.756607Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-06T19:51:21.756657Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-306799","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.125:2380"],"advertise-client-urls":["https://192.168.50.125:2379"]}
	{"level":"warn","ts":"2024-09-06T19:51:21.756784Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:51:21.756862Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:51:21.758582Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.125:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:51:21.758613Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.125:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:51:21.760051Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8e41abb37b207023","current-leader-member-id":"8e41abb37b207023"}
	{"level":"info","ts":"2024-09-06T19:51:21.763793Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-09-06T19:51:21.764100Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-09-06T19:51:21.764119Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-306799","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.125:2380"],"advertise-client-urls":["https://192.168.50.125:2379"]}
	
	
	==> kernel <==
	 19:51:50 up 1 min,  0 users,  load average: 1.23, 0.47, 0.17
	Linux pause-306799 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe] <==
	I0906 19:51:13.665614       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0906 19:51:13.665930       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0906 19:51:13.666271       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:51:13.671909       1 controller.go:157] Shutting down quota evaluator
	I0906 19:51:13.671942       1 controller.go:176] quota evaluator worker shutdown
	I0906 19:51:13.672349       1 controller.go:176] quota evaluator worker shutdown
	I0906 19:51:13.672378       1 controller.go:176] quota evaluator worker shutdown
	I0906 19:51:13.672458       1 controller.go:176] quota evaluator worker shutdown
	I0906 19:51:13.672482       1 controller.go:176] quota evaluator worker shutdown
	E0906 19:51:14.451274       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:14.455803       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:15.451524       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:15.455750       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:16.450692       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:16.456074       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:17.450290       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:17.456141       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:18.450833       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:18.455788       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:19.450729       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:19.455854       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:20.450843       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:20.455595       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:21.450780       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:21.455541       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [a6aa125f02671c821661b562c06339628d5739c6c5a4d8f29f41dca69b8b53a2] <==
	I0906 19:51:27.273834       1 policy_source.go:224] refreshing policies
	I0906 19:51:27.310298       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0906 19:51:27.310400       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:51:27.310424       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:51:27.310520       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:51:27.310591       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:51:27.315936       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:51:27.315990       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:51:27.321098       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:51:27.321308       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:51:27.321321       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:51:27.321327       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:51:27.321333       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:51:27.321635       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:51:27.321843       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0906 19:51:27.322194       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:51:28.110588       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:51:28.337315       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.125]
	I0906 19:51:28.338674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:51:28.343589       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 19:51:28.891353       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0906 19:51:28.907803       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0906 19:51:28.954790       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0906 19:51:29.003091       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:51:29.016973       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec] <==
	I0906 19:51:11.722777       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:51:12.284983       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:51:12.285023       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:51:12.286895       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:51:12.287410       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:51:12.287551       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:51:12.287627       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [52844a81f463ba40fa35ea170cd0a35604659a8725ed98bd83613bc9b14a5111] <==
	I0906 19:51:30.571065       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0906 19:51:30.571232       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0906 19:51:30.571435       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0906 19:51:30.575239       1 shared_informer.go:320] Caches are synced for TTL
	I0906 19:51:30.576534       1 shared_informer.go:320] Caches are synced for taint
	I0906 19:51:30.577218       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0906 19:51:30.577323       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-306799"
	I0906 19:51:30.577376       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0906 19:51:30.580761       1 shared_informer.go:320] Caches are synced for persistent volume
	I0906 19:51:30.583519       1 shared_informer.go:320] Caches are synced for attach detach
	I0906 19:51:30.586331       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0906 19:51:30.588734       1 shared_informer.go:320] Caches are synced for endpoint
	I0906 19:51:30.600641       1 shared_informer.go:320] Caches are synced for job
	I0906 19:51:30.629858       1 shared_informer.go:320] Caches are synced for disruption
	I0906 19:51:30.700446       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0906 19:51:30.712985       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 19:51:30.749962       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0906 19:51:30.760510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="202.664099ms"
	I0906 19:51:30.760732       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 19:51:30.761149       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="61.658µs"
	I0906 19:51:31.202223       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:51:31.249319       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:51:31.249362       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0906 19:51:44.789764       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="17.353441ms"
	I0906 19:51:44.790489       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="131.708µs"
	
	
	==> kube-proxy [82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:50:48.519328       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:50:48.532498       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.125"]
	E0906 19:50:48.532645       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:50:48.602246       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:50:48.602444       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:50:48.602534       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:50:48.607401       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:50:48.607668       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:50:48.607682       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:50:48.613834       1 config.go:197] "Starting service config controller"
	I0906 19:50:48.613862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:50:48.613924       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:50:48.613929       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:50:48.615395       1 config.go:326] "Starting node config controller"
	I0906 19:50:48.615458       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:50:48.714344       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:50:48.714406       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:50:48.715607       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c212965774e971df742d4f237b044d14d9fa3a1de859d3f9cf46ecf37335aeb4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:51:12.255271       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:51:13.539743       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.125"]
	E0906 19:51:13.539826       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:51:13.627146       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:51:13.627230       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:51:13.627261       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:51:13.629693       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:51:13.629919       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:51:13.629951       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:51:13.631348       1 config.go:197] "Starting service config controller"
	I0906 19:51:13.631388       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:51:13.631410       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:51:13.631414       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:51:13.634761       1 config.go:326] "Starting node config controller"
	I0906 19:51:13.635761       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:51:13.732304       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:51:13.732359       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:51:13.735922       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40] <==
	I0906 19:51:12.055924       1 serving.go:386] Generated self-signed cert in-memory
	W0906 19:51:13.549932       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:51:13.550039       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:51:13.550081       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:51:13.550111       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:51:13.584370       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 19:51:13.584635       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:51:13.586703       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 19:51:13.586820       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:51:13.587029       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:51:13.586847       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:51:13.687626       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 19:51:21.709866       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a7027ea848d25a7866d043535f4f659e3818295783aaabe2155764557f5b40e0] <==
	I0906 19:51:25.233140       1 serving.go:386] Generated self-signed cert in-memory
	W0906 19:51:27.191986       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:51:27.192033       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:51:27.192043       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:51:27.192053       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:51:27.260820       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 19:51:27.260862       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:51:27.264860       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 19:51:27.265056       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:51:27.265090       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:51:27.265105       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:51:27.365900       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.021421    3066 kubelet_node_status.go:72] "Attempting to register node" node="pause-306799"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: E0906 19:51:24.022240    3066 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.125:8443: connect: connection refused" node="pause-306799"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.108655    3066 scope.go:117] "RemoveContainer" containerID="2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.108785    3066 scope.go:117] "RemoveContainer" containerID="2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.110516    3066 scope.go:117] "RemoveContainer" containerID="38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.111539    3066 scope.go:117] "RemoveContainer" containerID="b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: E0906 19:51:24.246447    3066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-306799?timeout=10s\": dial tcp 192.168.50.125:8443: connect: connection refused" interval="800ms"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.424563    3066 kubelet_node_status.go:72] "Attempting to register node" node="pause-306799"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: E0906 19:51:24.425759    3066 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.125:8443: connect: connection refused" node="pause-306799"
	Sep 06 19:51:25 pause-306799 kubelet[3066]: I0906 19:51:25.228300    3066 kubelet_node_status.go:72] "Attempting to register node" node="pause-306799"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.300093    3066 kubelet_node_status.go:111] "Node was previously registered" node="pause-306799"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.300236    3066 kubelet_node_status.go:75] "Successfully registered node" node="pause-306799"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.300268    3066 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.301285    3066 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.610828    3066 apiserver.go:52] "Watching apiserver"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.641392    3066 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.702415    3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14826bd0-43b5-4952-90f9-8f59ffc98e91-lib-modules\") pod \"kube-proxy-gkn5p\" (UID: \"14826bd0-43b5-4952-90f9-8f59ffc98e91\") " pod="kube-system/kube-proxy-gkn5p"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.702557    3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14826bd0-43b5-4952-90f9-8f59ffc98e91-xtables-lock\") pod \"kube-proxy-gkn5p\" (UID: \"14826bd0-43b5-4952-90f9-8f59ffc98e91\") " pod="kube-system/kube-proxy-gkn5p"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: E0906 19:51:27.826245    3066 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-pause-306799\" already exists" pod="kube-system/etcd-pause-306799"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.914518    3066 scope.go:117] "RemoveContainer" containerID="ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b"
	Sep 06 19:51:33 pause-306799 kubelet[3066]: E0906 19:51:33.723837    3066 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652293723382398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:51:33 pause-306799 kubelet[3066]: E0906 19:51:33.723995    3066 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652293723382398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:51:34 pause-306799 kubelet[3066]: I0906 19:51:34.752443    3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 06 19:51:43 pause-306799 kubelet[3066]: E0906 19:51:43.725973    3066 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652303725357070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:51:43 pause-306799 kubelet[3066]: E0906 19:51:43.726088    3066 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652303725357070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-306799 -n pause-306799
helpers_test.go:261: (dbg) Run:  kubectl --context pause-306799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-306799 -n pause-306799
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-306799 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-306799 logs -n 25: (1.389138202s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p NoKubernetes-944227                | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:47 UTC |
	| start   | -p NoKubernetes-944227                | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:48 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-952957             | running-upgrade-952957    | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:47 UTC |
	| start   | -p force-systemd-flag-689823          | force-systemd-flag-689823 | jenkins | v1.34.0 | 06 Sep 24 19:47 UTC | 06 Sep 24 19:48 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-944227 sudo           | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-944227                | NoKubernetes-944227       | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	| start   | -p cert-expiration-097103             | cert-expiration-097103    | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:49 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-098096 stop           | minikube                  | jenkins | v1.26.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	| start   | -p stopped-upgrade-098096             | stopped-upgrade-098096    | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:49 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-689823 ssh cat     | force-systemd-flag-689823 | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-689823          | force-systemd-flag-689823 | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:48 UTC |
	| start   | -p cert-options-417185                | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:48 UTC | 06 Sep 24 19:49 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	| start   | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:50 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-098096             | stopped-upgrade-098096    | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	| start   | -p pause-306799 --memory=2048         | pause-306799              | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:50 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-417185 ssh               | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-417185 -- sudo        | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:49 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-417185                | cert-options-417185       | jenkins | v1.34.0 | 06 Sep 24 19:49 UTC | 06 Sep 24 19:50 UTC |
	| start   | -p auto-603826 --memory=3072          | auto-603826               | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC | 06 Sep 24 19:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-306799                       | pause-306799              | jenkins | v1.34.0 | 06 Sep 24 19:50 UTC | 06 Sep 24 19:51 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-959423          | kubernetes-upgrade-959423 | jenkins | v1.34.0 | 06 Sep 24 19:51 UTC | 06 Sep 24 19:51 UTC |
	| start   | -p kindnet-603826                     | kindnet-603826            | jenkins | v1.34.0 | 06 Sep 24 19:51 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 19:51:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 19:51:48.012157   58167 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:51:48.012286   58167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:51:48.012296   58167 out.go:358] Setting ErrFile to fd 2...
	I0906 19:51:48.012300   58167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:51:48.012457   58167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:51:48.013040   58167 out.go:352] Setting JSON to false
	I0906 19:51:48.013955   58167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5657,"bootTime":1725646651,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:51:48.014012   58167 start.go:139] virtualization: kvm guest
	I0906 19:51:48.016023   58167 out.go:177] * [kindnet-603826] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:51:48.017290   58167 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:51:48.017319   58167 notify.go:220] Checking for updates...
	I0906 19:51:48.019869   58167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:51:48.021034   58167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:51:48.022218   58167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:51:48.023582   58167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:51:48.024831   58167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:51:48.026515   58167 config.go:182] Loaded profile config "auto-603826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:51:48.026641   58167 config.go:182] Loaded profile config "cert-expiration-097103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:51:48.026764   58167 config.go:182] Loaded profile config "pause-306799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:51:48.026855   58167 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:51:48.063861   58167 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 19:51:48.064915   58167 start.go:297] selected driver: kvm2
	I0906 19:51:48.064929   58167 start.go:901] validating driver "kvm2" against <nil>
	I0906 19:51:48.064939   58167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:51:48.065648   58167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:51:48.065756   58167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:51:48.081648   58167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:51:48.081690   58167 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 19:51:48.081892   58167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:51:48.081953   58167 cni.go:84] Creating CNI manager for "kindnet"
	I0906 19:51:48.081965   58167 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 19:51:48.082037   58167 start.go:340] cluster config:
	{Name:kindnet-603826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-603826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0906 19:51:48.082157   58167 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:51:48.083916   58167 out.go:177] * Starting "kindnet-603826" primary control-plane node in "kindnet-603826" cluster
	I0906 19:51:48.085072   58167 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 19:51:48.085107   58167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:51:48.085125   58167 cache.go:56] Caching tarball of preloaded images
	I0906 19:51:48.085219   58167 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:51:48.085233   58167 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 19:51:48.085347   58167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/config.json ...
	I0906 19:51:48.085380   58167 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/config.json: {Name:mkbc31c2f5d89fcc809ca76c473197d79a6361b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:51:48.085536   58167 start.go:360] acquireMachinesLock for kindnet-603826: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:51:48.085569   58167 start.go:364] duration metric: took 17.355µs to acquireMachinesLock for "kindnet-603826"
	I0906 19:51:48.085592   58167 start.go:93] Provisioning new machine with config: &{Name:kindnet-603826 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-603826 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 19:51:48.085668   58167 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 19:51:47.862224   57714 pod_ready.go:93] pod "kube-proxy-gkn5p" in "kube-system" namespace has status "Ready":"True"
	I0906 19:51:47.862251   57714 pod_ready.go:82] duration metric: took 400.656748ms for pod "kube-proxy-gkn5p" in "kube-system" namespace to be "Ready" ...
	I0906 19:51:47.862264   57714 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-306799" in "kube-system" namespace to be "Ready" ...
	I0906 19:51:48.261715   57714 pod_ready.go:93] pod "kube-scheduler-pause-306799" in "kube-system" namespace has status "Ready":"True"
	I0906 19:51:48.261738   57714 pod_ready.go:82] duration metric: took 399.466165ms for pod "kube-scheduler-pause-306799" in "kube-system" namespace to be "Ready" ...
	I0906 19:51:48.261746   57714 pod_ready.go:39] duration metric: took 2.577450358s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 19:51:48.261759   57714 api_server.go:52] waiting for apiserver process to appear ...
	I0906 19:51:48.261809   57714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:51:48.277065   57714 api_server.go:72] duration metric: took 2.797092849s to wait for apiserver process to appear ...
	I0906 19:51:48.277095   57714 api_server.go:88] waiting for apiserver healthz status ...
	I0906 19:51:48.277112   57714 api_server.go:253] Checking apiserver healthz at https://192.168.50.125:8443/healthz ...
	I0906 19:51:48.284123   57714 api_server.go:279] https://192.168.50.125:8443/healthz returned 200:
	ok
	I0906 19:51:48.285370   57714 api_server.go:141] control plane version: v1.31.0
	I0906 19:51:48.285398   57714 api_server.go:131] duration metric: took 8.295097ms to wait for apiserver health ...
	I0906 19:51:48.285407   57714 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 19:51:48.466273   57714 system_pods.go:59] 6 kube-system pods found
	I0906 19:51:48.466302   57714 system_pods.go:61] "coredns-6f6b679f8f-qb82l" [410a1028-fc4f-42ae-86d9-6c405a1468da] Running
	I0906 19:51:48.466307   57714 system_pods.go:61] "etcd-pause-306799" [4df0de72-66e2-4238-bcd0-b7ccb6eee73a] Running
	I0906 19:51:48.466311   57714 system_pods.go:61] "kube-apiserver-pause-306799" [fee6ee9c-6596-41c0-ac9c-99d56b9c4beb] Running
	I0906 19:51:48.466314   57714 system_pods.go:61] "kube-controller-manager-pause-306799" [6b7ef8c0-cd93-402d-95c1-4fcc296c52a6] Running
	I0906 19:51:48.466317   57714 system_pods.go:61] "kube-proxy-gkn5p" [14826bd0-43b5-4952-90f9-8f59ffc98e91] Running
	I0906 19:51:48.466320   57714 system_pods.go:61] "kube-scheduler-pause-306799" [59c40e8b-3cae-4a16-a8b1-eaf7d667e29a] Running
	I0906 19:51:48.466326   57714 system_pods.go:74] duration metric: took 180.912639ms to wait for pod list to return data ...
	I0906 19:51:48.466332   57714 default_sa.go:34] waiting for default service account to be created ...
	I0906 19:51:48.662528   57714 default_sa.go:45] found service account: "default"
	I0906 19:51:48.662561   57714 default_sa.go:55] duration metric: took 196.222191ms for default service account to be created ...
	I0906 19:51:48.662579   57714 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 19:51:48.863838   57714 system_pods.go:86] 6 kube-system pods found
	I0906 19:51:48.863867   57714 system_pods.go:89] "coredns-6f6b679f8f-qb82l" [410a1028-fc4f-42ae-86d9-6c405a1468da] Running
	I0906 19:51:48.863873   57714 system_pods.go:89] "etcd-pause-306799" [4df0de72-66e2-4238-bcd0-b7ccb6eee73a] Running
	I0906 19:51:48.863876   57714 system_pods.go:89] "kube-apiserver-pause-306799" [fee6ee9c-6596-41c0-ac9c-99d56b9c4beb] Running
	I0906 19:51:48.863880   57714 system_pods.go:89] "kube-controller-manager-pause-306799" [6b7ef8c0-cd93-402d-95c1-4fcc296c52a6] Running
	I0906 19:51:48.863884   57714 system_pods.go:89] "kube-proxy-gkn5p" [14826bd0-43b5-4952-90f9-8f59ffc98e91] Running
	I0906 19:51:48.863889   57714 system_pods.go:89] "kube-scheduler-pause-306799" [59c40e8b-3cae-4a16-a8b1-eaf7d667e29a] Running
	I0906 19:51:48.863898   57714 system_pods.go:126] duration metric: took 201.311721ms to wait for k8s-apps to be running ...
	I0906 19:51:48.863906   57714 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 19:51:48.863955   57714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:51:48.883163   57714 system_svc.go:56] duration metric: took 19.237095ms WaitForService to wait for kubelet
	I0906 19:51:48.883198   57714 kubeadm.go:582] duration metric: took 3.403229464s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:51:48.883221   57714 node_conditions.go:102] verifying NodePressure condition ...
	I0906 19:51:49.062389   57714 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 19:51:49.062412   57714 node_conditions.go:123] node cpu capacity is 2
	I0906 19:51:49.062422   57714 node_conditions.go:105] duration metric: took 179.196825ms to run NodePressure ...
	I0906 19:51:49.062433   57714 start.go:241] waiting for startup goroutines ...
	I0906 19:51:49.062440   57714 start.go:246] waiting for cluster config update ...
	I0906 19:51:49.062446   57714 start.go:255] writing updated cluster config ...
	I0906 19:51:49.062717   57714 ssh_runner.go:195] Run: rm -f paused
	I0906 19:51:49.110865   57714 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 19:51:49.112907   57714 out.go:177] * Done! kubectl is now configured to use "pause-306799" cluster and "default" namespace by default
	I0906 19:51:46.938709   57042 pod_ready.go:103] pod "coredns-6f6b679f8f-lzj6l" in "kube-system" namespace has status "Ready":"False"
	I0906 19:51:48.939635   57042 pod_ready.go:103] pod "coredns-6f6b679f8f-lzj6l" in "kube-system" namespace has status "Ready":"False"
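The pause-306799 portion of the log above gates success on two probes: it first waits for a kube-apiserver process on the node (the pgrep line), then polls the API server's /healthz endpoint until it returns 200 with body "ok". A rough manual equivalent of those two checks, assuming the same node and the endpoint shown in the log (https://192.168.50.125:8443/healthz), and using curl -k because the serving certificate is signed by the cluster's own CA rather than a public one:

    # inside the node (e.g. via: minikube ssh -p pause-306799): wait for the apiserver process
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # from anywhere that can reach the node: query the health endpoint
    curl -k https://192.168.50.125:8443/healthz    # prints "ok" when the control plane is healthy

These commands are illustrative only; the test harness performs the same checks through minikube's ssh_runner and api_server helpers rather than an interactive shell.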
	
	
	==> CRI-O <==
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.868684284Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1626c3a-08e6-401f-88de-6df79b235c57 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.869696339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7da66f5-a328-4ec2-bbd5-fdb65a8bd171 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.870090395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652311870065667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7da66f5-a328-4ec2-bbd5-fdb65a8bd171 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.871338800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f149eaec-84a0-454b-af57-1f03f96d8f06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.871402651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f149eaec-84a0-454b-af57-1f03f96d8f06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.871680080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6755a00286339490ee1fb393ec133b6ac4e8be4558805c48642998a567456010,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652287924368926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1585f47ead7164ea27c9811160c49f3018097ac3add5dd0687a29497b3d94a17,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652284160762737,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870a6fb
47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7027ea848d25a7866d043535f4f659e3818295783aaabe2155764557f5b40e0,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652284147541842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beeb
edfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6aa125f02671c821661b562c06339628d5739c6c5a4d8f29f41dca69b8b53a2,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725652284154566771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52844a81f463ba40fa35ea170cd0a35604659a8725ed98bd83613bc9b14a5111,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652284124565368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c212965774e971df742d4f237b044d14d9fa3a1de859d3f9cf46ecf37335aeb4,PodSandboxId:5bce3ebbc39ecbbea52ed548e9e079b87b36cd52014951c6ecd4577c6a0b2232,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725652270354782983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652271140832842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652270345362961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beebedfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652270357872271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube
-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652270227338198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 870a6fb47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652270150567824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2,PodSandboxId:8f66cdb2628d86eed6b84d62935e25501740835f1d4db524dda2d93ea5390523,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652248282743368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f149eaec-84a0-454b-af57-1f03f96d8f06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.927237911Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=67cc3eed-605e-4c17-bd3e-24f0b9849f1d name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.927537456Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-qb82l,Uid:410a1028-fc4f-42ae-86d9-6c405a1468da,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725652270080316895,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T19:50:48.106896455Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-306799,Uid:dbf9034173e478e72bf32beebedfc2ae,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1725652269899124940,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beebedfc2ae,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dbf9034173e478e72bf32beebedfc2ae,kubernetes.io/config.seen: 2024-09-06T19:50:42.547584474Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-306799,Uid:7325d254b7ce37e89638d8242cad943f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725652269898082276,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,tier: c
ontrol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.125:8443,kubernetes.io/config.hash: 7325d254b7ce37e89638d8242cad943f,kubernetes.io/config.seen: 2024-09-06T19:50:42.547581251Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&PodSandboxMetadata{Name:etcd-pause-306799,Uid:870a6fb47feef38d1c3d0a18e2f0c5ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725652269897381628,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870a6fb47feef38d1c3d0a18e2f0c5ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.125:2379,kubernetes.io/config.hash: 870a6fb47feef38d1c3d0a18e2f0c5ec,kubernetes.io/config.seen: 2024-09-06T19:50:42.547576842Z,kuberne
tes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-306799,Uid:5dbae4fd4cbf227454c0750af6cecf90,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725652269865085757,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5dbae4fd4cbf227454c0750af6cecf90,kubernetes.io/config.seen: 2024-09-06T19:50:42.547583087Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5bce3ebbc39ecbbea52ed548e9e079b87b36cd52014951c6ecd4577c6a0b2232,Metadata:&PodSandboxMetadata{Name:kube-proxy-gkn5p,Uid:14826bd0-43b5-4952-90f9-8f59ffc98e91,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1725652269842714565,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T19:50:47.843312963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f66cdb2628d86eed6b84d62935e25501740835f1d4db524dda2d93ea5390523,Metadata:&PodSandboxMetadata{Name:kube-proxy-gkn5p,Uid:14826bd0-43b5-4952-90f9-8f59ffc98e91,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1725652248157980085,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-06T19:50:47.843312963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=67cc3eed-605e-4c17-bd3e-24f0b9849f1d name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.929233692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58d4d75a-1566-4662-9cd3-340309d23c7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.929334193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58d4d75a-1566-4662-9cd3-340309d23c7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.929709351Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6755a00286339490ee1fb393ec133b6ac4e8be4558805c48642998a567456010,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652287924368926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1585f47ead7164ea27c9811160c49f3018097ac3add5dd0687a29497b3d94a17,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652284160762737,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870a6fb
47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7027ea848d25a7866d043535f4f659e3818295783aaabe2155764557f5b40e0,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652284147541842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beeb
edfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6aa125f02671c821661b562c06339628d5739c6c5a4d8f29f41dca69b8b53a2,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725652284154566771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52844a81f463ba40fa35ea170cd0a35604659a8725ed98bd83613bc9b14a5111,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652284124565368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c212965774e971df742d4f237b044d14d9fa3a1de859d3f9cf46ecf37335aeb4,PodSandboxId:5bce3ebbc39ecbbea52ed548e9e079b87b36cd52014951c6ecd4577c6a0b2232,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725652270354782983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652271140832842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652270345362961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beebedfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652270357872271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube
-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652270227338198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 870a6fb47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652270150567824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2,PodSandboxId:8f66cdb2628d86eed6b84d62935e25501740835f1d4db524dda2d93ea5390523,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652248282743368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58d4d75a-1566-4662-9cd3-340309d23c7d name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.932427422Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=420b2fa8-e21a-4795-96a8-eb4f17808d7a name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.932500666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=420b2fa8-e21a-4795-96a8-eb4f17808d7a name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.933654863Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54864e2e-1545-4145-bdd8-7e9dfddaaa51 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.934450866Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652311934423153,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54864e2e-1545-4145-bdd8-7e9dfddaaa51 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.938800754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88d0c12c-01b4-4092-ba4c-355b1c39e8f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.938882424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88d0c12c-01b4-4092-ba4c-355b1c39e8f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.939622684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6755a00286339490ee1fb393ec133b6ac4e8be4558805c48642998a567456010,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652287924368926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1585f47ead7164ea27c9811160c49f3018097ac3add5dd0687a29497b3d94a17,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652284160762737,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870a6fb
47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7027ea848d25a7866d043535f4f659e3818295783aaabe2155764557f5b40e0,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652284147541842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beeb
edfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6aa125f02671c821661b562c06339628d5739c6c5a4d8f29f41dca69b8b53a2,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725652284154566771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52844a81f463ba40fa35ea170cd0a35604659a8725ed98bd83613bc9b14a5111,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652284124565368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c212965774e971df742d4f237b044d14d9fa3a1de859d3f9cf46ecf37335aeb4,PodSandboxId:5bce3ebbc39ecbbea52ed548e9e079b87b36cd52014951c6ecd4577c6a0b2232,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725652270354782983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652271140832842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652270345362961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beebedfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652270357872271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube
-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652270227338198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 870a6fb47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652270150567824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2,PodSandboxId:8f66cdb2628d86eed6b84d62935e25501740835f1d4db524dda2d93ea5390523,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652248282743368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88d0c12c-01b4-4092-ba4c-355b1c39e8f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.986661564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=172ae2c6-d035-4a63-addb-483943805a80 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.986777493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=172ae2c6-d035-4a63-addb-483943805a80 name=/runtime.v1.RuntimeService/Version
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.988712251Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2ad3799-279e-4044-9740-45181dfe77c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.989127148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652311989100195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2ad3799-279e-4044-9740-45181dfe77c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.989963615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4fc0f0b-d490-4fe5-be8f-aedae701ec13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.990035839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4fc0f0b-d490-4fe5-be8f-aedae701ec13 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 19:51:51 pause-306799 crio[2086]: time="2024-09-06 19:51:51.990333037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6755a00286339490ee1fb393ec133b6ac4e8be4558805c48642998a567456010,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725652287924368926,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1585f47ead7164ea27c9811160c49f3018097ac3add5dd0687a29497b3d94a17,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725652284160762737,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 870a6fb
47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7027ea848d25a7866d043535f4f659e3818295783aaabe2155764557f5b40e0,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725652284147541842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beeb
edfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6aa125f02671c821661b562c06339628d5739c6c5a4d8f29f41dca69b8b53a2,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725652284154566771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotati
ons:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52844a81f463ba40fa35ea170cd0a35604659a8725ed98bd83613bc9b14a5111,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725652284124565368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,}
,Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c212965774e971df742d4f237b044d14d9fa3a1de859d3f9cf46ecf37335aeb4,PodSandboxId:5bce3ebbc39ecbbea52ed548e9e079b87b36cd52014951c6ecd4577c6a0b2232,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1725652270354782983,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io
.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b,PodSandboxId:3c7d09984028ef212b4b67a19f34d305fc19d56045730d397f7810593f4d93c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1725652271140832842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-qb82l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410a1028-fc4f-42ae-86d9-6c405a1468da,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40,PodSandboxId:e6c0e8cfe44a417b59d6f9a0a515e3dcd0825769609d5bb5dd8b203cf3a2cce9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1725652270345362961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubern
etes.pod.name: kube-scheduler-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbf9034173e478e72bf32beebedfc2ae,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe,PodSandboxId:3db9a6cbedd05e6cd3fba958e781593923b733c0ec1f455d18467238de04cd34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725652270357872271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube
-apiserver-pause-306799,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7325d254b7ce37e89638d8242cad943f,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561,PodSandboxId:88b45ecc0a6e09819063d734fdbcdd74b353c44567082a9dd61390ae2da2cb98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1725652270227338198,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-306799,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 870a6fb47feef38d1c3d0a18e2f0c5ec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec,PodSandboxId:9210a3470f8d9ce4838da96e7d26400d1986a3bcde2b62bf77162c215e9aa7c8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1725652270150567824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-306799,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 5dbae4fd4cbf227454c0750af6cecf90,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2,PodSandboxId:8f66cdb2628d86eed6b84d62935e25501740835f1d4db524dda2d93ea5390523,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1725652248282743368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkn5p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 14826bd0-43b5-4952-90f9-8f59ffc98e91,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4fc0f0b-d490-4fe5-be8f-aedae701ec13 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6755a00286339       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago       Running             coredns                   2                   3c7d09984028e       coredns-6f6b679f8f-qb82l
	1585f47ead716       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   27 seconds ago       Running             etcd                      2                   88b45ecc0a6e0       etcd-pause-306799
	a6aa125f02671       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   27 seconds ago       Running             kube-apiserver            2                   3db9a6cbedd05       kube-apiserver-pause-306799
	a7027ea848d25       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   27 seconds ago       Running             kube-scheduler            2                   e6c0e8cfe44a4       kube-scheduler-pause-306799
	52844a81f463b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   27 seconds ago       Running             kube-controller-manager   2                   9210a3470f8d9       kube-controller-manager-pause-306799
	ad259c357baa9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   40 seconds ago       Exited              coredns                   1                   3c7d09984028e       coredns-6f6b679f8f-qb82l
	2312d9d3305a5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   41 seconds ago       Exited              kube-apiserver            1                   3db9a6cbedd05       kube-apiserver-pause-306799
	c212965774e97       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   41 seconds ago       Running             kube-proxy                1                   5bce3ebbc39ec       kube-proxy-gkn5p
	38a62e6a740e6       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   41 seconds ago       Exited              kube-scheduler            1                   e6c0e8cfe44a4       kube-scheduler-pause-306799
	b449348d8d9f2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   41 seconds ago       Exited              etcd                      1                   88b45ecc0a6e0       etcd-pause-306799
	2e9bf79eab24a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   41 seconds ago       Exited              kube-controller-manager   1                   9210a3470f8d9       kube-controller-manager-pause-306799
	82d27dc22b28f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Exited              kube-proxy                0                   8f66cdb2628d8       kube-proxy-gkn5p
	
	
	==> coredns [6755a00286339490ee1fb393ec133b6ac4e8be4558805c48642998a567456010] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48134 - 35287 "HINFO IN 7858879936472383266.3356898823924046315. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008707979s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b] <==
	
	
	==> describe nodes <==
	Name:               pause-306799
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-306799
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=pause-306799
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T19_50_43_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 19:50:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-306799
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 19:51:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 19:51:27 +0000   Fri, 06 Sep 2024 19:50:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 19:51:27 +0000   Fri, 06 Sep 2024 19:50:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 19:51:27 +0000   Fri, 06 Sep 2024 19:50:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 19:51:27 +0000   Fri, 06 Sep 2024 19:50:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.125
	  Hostname:    pause-306799
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4a177ab557549b2bcc6bd03a8044a0a
	  System UUID:                d4a177ab-5575-49b2-bcc6-bd03a8044a0a
	  Boot ID:                    405df62b-1d51-4a93-ac57-8d3619d7e9c6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-qb82l                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     64s
	  kube-system                 etcd-pause-306799                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         71s
	  kube-system                 kube-apiserver-pause-306799             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-pause-306799    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-gkn5p                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-pause-306799             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 63s                kube-proxy       
	  Normal  Starting                 38s                kube-proxy       
	  Normal  NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node pause-306799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node pause-306799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s                kubelet          Node pause-306799 status is now: NodeHasSufficientPID
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeReady                69s                kubelet          Node pause-306799 status is now: NodeReady
	  Normal  RegisteredNode           66s                node-controller  Node pause-306799 event: Registered Node pause-306799 in Controller
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node pause-306799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node pause-306799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node pause-306799 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22s                node-controller  Node pause-306799 event: Registered Node pause-306799 in Controller
	
	
	==> dmesg <==
	[  +8.988377] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.059952] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070952] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.177644] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.140445] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.302799] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.313096] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +0.066264] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.295083] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +1.187280] kauditd_printk_skb: 57 callbacks suppressed
	[  +4.913945] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +0.083637] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.848881] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +0.973208] kauditd_printk_skb: 41 callbacks suppressed
	[Sep 6 19:51] systemd-fstab-generator[2011]: Ignoring "noauto" option for root device
	[  +0.081923] kauditd_printk_skb: 49 callbacks suppressed
	[  +0.069288] systemd-fstab-generator[2023]: Ignoring "noauto" option for root device
	[  +0.185317] systemd-fstab-generator[2037]: Ignoring "noauto" option for root device
	[  +0.150454] systemd-fstab-generator[2049]: Ignoring "noauto" option for root device
	[  +0.310089] systemd-fstab-generator[2078]: Ignoring "noauto" option for root device
	[  +0.686329] systemd-fstab-generator[2199]: Ignoring "noauto" option for root device
	[  +4.263594] kauditd_printk_skb: 196 callbacks suppressed
	[  +9.632039] systemd-fstab-generator[3059]: Ignoring "noauto" option for root device
	[  +4.557138] kauditd_printk_skb: 41 callbacks suppressed
	[ +17.571243] systemd-fstab-generator[3404]: Ignoring "noauto" option for root device
	
	
	==> etcd [1585f47ead7164ea27c9811160c49f3018097ac3add5dd0687a29497b3d94a17] <==
	{"level":"info","ts":"2024-09-06T19:51:24.568886Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-06T19:51:24.569731Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8e41abb37b207023","initial-advertise-peer-urls":["https://192.168.50.125:2380"],"listen-peer-urls":["https://192.168.50.125:2380"],"advertise-client-urls":["https://192.168.50.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T19:51:24.568913Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-09-06T19:51:24.569314Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-06T19:51:24.569483Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40e9c4986db8cbc5","local-member-id":"8e41abb37b207023","added-peer-id":"8e41abb37b207023","added-peer-peer-urls":["https://192.168.50.125:2380"]}
	{"level":"info","ts":"2024-09-06T19:51:24.570944Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T19:51:24.572397Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-09-06T19:51:24.572661Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40e9c4986db8cbc5","local-member-id":"8e41abb37b207023","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:51:24.572742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T19:51:25.833552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:25.833666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:25.833721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgPreVoteResp from 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:25.833759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became candidate at term 4"}
	{"level":"info","ts":"2024-09-06T19:51:25.833783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgVoteResp from 8e41abb37b207023 at term 4"}
	{"level":"info","ts":"2024-09-06T19:51:25.833811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became leader at term 4"}
	{"level":"info","ts":"2024-09-06T19:51:25.833848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e41abb37b207023 elected leader 8e41abb37b207023 at term 4"}
	{"level":"info","ts":"2024-09-06T19:51:25.839674Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8e41abb37b207023","local-member-attributes":"{Name:pause-306799 ClientURLs:[https://192.168.50.125:2379]}","request-path":"/0/members/8e41abb37b207023/attributes","cluster-id":"40e9c4986db8cbc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:51:25.839775Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:51:25.840293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:51:25.841122Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:25.842375Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:51:25.845696Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:25.846412Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.125:2379"}
	{"level":"info","ts":"2024-09-06T19:51:25.846511Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:51:25.846540Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561] <==
	{"level":"info","ts":"2024-09-06T19:51:11.960241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-06T19:51:11.960266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgPreVoteResp from 8e41abb37b207023 at term 2"}
	{"level":"info","ts":"2024-09-06T19:51:11.960288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became candidate at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:11.960294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 received MsgVoteResp from 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:11.960302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e41abb37b207023 became leader at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:11.960309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e41abb37b207023 elected leader 8e41abb37b207023 at term 3"}
	{"level":"info","ts":"2024-09-06T19:51:11.963509Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8e41abb37b207023","local-member-attributes":"{Name:pause-306799 ClientURLs:[https://192.168.50.125:2379]}","request-path":"/0/members/8e41abb37b207023/attributes","cluster-id":"40e9c4986db8cbc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T19:51:11.963660Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:51:11.966147Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:11.971021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.125:2379"}
	{"level":"info","ts":"2024-09-06T19:51:11.989229Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T19:51:11.992215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T19:51:11.992298Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T19:51:11.992851Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T19:51:11.996975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T19:51:21.756607Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-06T19:51:21.756657Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-306799","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.125:2380"],"advertise-client-urls":["https://192.168.50.125:2379"]}
	{"level":"warn","ts":"2024-09-06T19:51:21.756784Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:51:21.756862Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:51:21.758582Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.125:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-06T19:51:21.758613Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.125:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-06T19:51:21.760051Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8e41abb37b207023","current-leader-member-id":"8e41abb37b207023"}
	{"level":"info","ts":"2024-09-06T19:51:21.763793Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-09-06T19:51:21.764100Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.125:2380"}
	{"level":"info","ts":"2024-09-06T19:51:21.764119Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-306799","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.125:2380"],"advertise-client-urls":["https://192.168.50.125:2379"]}
	
	
	==> kernel <==
	 19:51:52 up 1 min,  0 users,  load average: 1.21, 0.48, 0.18
	Linux pause-306799 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe] <==
	I0906 19:51:13.665614       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0906 19:51:13.665930       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0906 19:51:13.666271       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0906 19:51:13.671909       1 controller.go:157] Shutting down quota evaluator
	I0906 19:51:13.671942       1 controller.go:176] quota evaluator worker shutdown
	I0906 19:51:13.672349       1 controller.go:176] quota evaluator worker shutdown
	I0906 19:51:13.672378       1 controller.go:176] quota evaluator worker shutdown
	I0906 19:51:13.672458       1 controller.go:176] quota evaluator worker shutdown
	I0906 19:51:13.672482       1 controller.go:176] quota evaluator worker shutdown
	E0906 19:51:14.451274       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:14.455803       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:15.451524       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:15.455750       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:16.450692       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:16.456074       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:17.450290       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:17.456141       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:18.450833       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:18.455788       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:19.450729       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:19.455854       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:20.450843       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:20.455595       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0906 19:51:21.450780       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0906 19:51:21.455541       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [a6aa125f02671c821661b562c06339628d5739c6c5a4d8f29f41dca69b8b53a2] <==
	I0906 19:51:27.273834       1 policy_source.go:224] refreshing policies
	I0906 19:51:27.310298       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0906 19:51:27.310400       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0906 19:51:27.310424       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0906 19:51:27.310520       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0906 19:51:27.310591       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0906 19:51:27.315936       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0906 19:51:27.315990       1 shared_informer.go:320] Caches are synced for configmaps
	I0906 19:51:27.321098       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0906 19:51:27.321308       1 aggregator.go:171] initial CRD sync complete...
	I0906 19:51:27.321321       1 autoregister_controller.go:144] Starting autoregister controller
	I0906 19:51:27.321327       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0906 19:51:27.321333       1 cache.go:39] Caches are synced for autoregister controller
	I0906 19:51:27.321635       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0906 19:51:27.321843       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0906 19:51:27.322194       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0906 19:51:28.110588       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0906 19:51:28.337315       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.125]
	I0906 19:51:28.338674       1 controller.go:615] quota admission added evaluator for: endpoints
	I0906 19:51:28.343589       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0906 19:51:28.891353       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0906 19:51:28.907803       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0906 19:51:28.954790       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0906 19:51:29.003091       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0906 19:51:29.016973       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec] <==
	I0906 19:51:11.722777       1 serving.go:386] Generated self-signed cert in-memory
	I0906 19:51:12.284983       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0906 19:51:12.285023       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:51:12.286895       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0906 19:51:12.287410       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0906 19:51:12.287551       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:51:12.287627       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [52844a81f463ba40fa35ea170cd0a35604659a8725ed98bd83613bc9b14a5111] <==
	I0906 19:51:30.571065       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0906 19:51:30.571232       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0906 19:51:30.571435       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0906 19:51:30.575239       1 shared_informer.go:320] Caches are synced for TTL
	I0906 19:51:30.576534       1 shared_informer.go:320] Caches are synced for taint
	I0906 19:51:30.577218       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0906 19:51:30.577323       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-306799"
	I0906 19:51:30.577376       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0906 19:51:30.580761       1 shared_informer.go:320] Caches are synced for persistent volume
	I0906 19:51:30.583519       1 shared_informer.go:320] Caches are synced for attach detach
	I0906 19:51:30.586331       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0906 19:51:30.588734       1 shared_informer.go:320] Caches are synced for endpoint
	I0906 19:51:30.600641       1 shared_informer.go:320] Caches are synced for job
	I0906 19:51:30.629858       1 shared_informer.go:320] Caches are synced for disruption
	I0906 19:51:30.700446       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0906 19:51:30.712985       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 19:51:30.749962       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0906 19:51:30.760510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="202.664099ms"
	I0906 19:51:30.760732       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 19:51:30.761149       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="61.658µs"
	I0906 19:51:31.202223       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:51:31.249319       1 shared_informer.go:320] Caches are synced for garbage collector
	I0906 19:51:31.249362       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0906 19:51:44.789764       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="17.353441ms"
	I0906 19:51:44.790489       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="131.708µs"
	
	
	==> kube-proxy [82d27dc22b28f5e41123fc7871113f3011ea1d47d068e24ac6b503c7a38001e2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:50:48.519328       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:50:48.532498       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.125"]
	E0906 19:50:48.532645       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:50:48.602246       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:50:48.602444       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:50:48.602534       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:50:48.607401       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:50:48.607668       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:50:48.607682       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:50:48.613834       1 config.go:197] "Starting service config controller"
	I0906 19:50:48.613862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:50:48.613924       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:50:48.613929       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:50:48.615395       1 config.go:326] "Starting node config controller"
	I0906 19:50:48.615458       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:50:48.714344       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:50:48.714406       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:50:48.715607       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c212965774e971df742d4f237b044d14d9fa3a1de859d3f9cf46ecf37335aeb4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 19:51:12.255271       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 19:51:13.539743       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.125"]
	E0906 19:51:13.539826       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 19:51:13.627146       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 19:51:13.627230       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 19:51:13.627261       1 server_linux.go:169] "Using iptables Proxier"
	I0906 19:51:13.629693       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 19:51:13.629919       1 server.go:483] "Version info" version="v1.31.0"
	I0906 19:51:13.629951       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:51:13.631348       1 config.go:197] "Starting service config controller"
	I0906 19:51:13.631388       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 19:51:13.631410       1 config.go:104] "Starting endpoint slice config controller"
	I0906 19:51:13.631414       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 19:51:13.634761       1 config.go:326] "Starting node config controller"
	I0906 19:51:13.635761       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 19:51:13.732304       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 19:51:13.732359       1 shared_informer.go:320] Caches are synced for service config
	I0906 19:51:13.735922       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40] <==
	I0906 19:51:12.055924       1 serving.go:386] Generated self-signed cert in-memory
	W0906 19:51:13.549932       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:51:13.550039       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:51:13.550081       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:51:13.550111       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:51:13.584370       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 19:51:13.584635       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:51:13.586703       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 19:51:13.586820       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:51:13.587029       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:51:13.586847       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:51:13.687626       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0906 19:51:21.709866       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a7027ea848d25a7866d043535f4f659e3818295783aaabe2155764557f5b40e0] <==
	I0906 19:51:25.233140       1 serving.go:386] Generated self-signed cert in-memory
	W0906 19:51:27.191986       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0906 19:51:27.192033       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0906 19:51:27.192043       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0906 19:51:27.192053       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0906 19:51:27.260820       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0906 19:51:27.260862       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 19:51:27.264860       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0906 19:51:27.265056       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0906 19:51:27.265090       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0906 19:51:27.265105       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0906 19:51:27.365900       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.021421    3066 kubelet_node_status.go:72] "Attempting to register node" node="pause-306799"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: E0906 19:51:24.022240    3066 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.125:8443: connect: connection refused" node="pause-306799"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.108655    3066 scope.go:117] "RemoveContainer" containerID="2312d9d3305a539e0a708a7acb9ae49aad9e5b0531c4c032e113c719ab0a02fe"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.108785    3066 scope.go:117] "RemoveContainer" containerID="2e9bf79eab24ad383ad132691cfa37389f3f195800f0f4b46029a5a84474efec"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.110516    3066 scope.go:117] "RemoveContainer" containerID="38a62e6a740e660e8bf61a4a63ed1e03a871125b3024e835e89ee152ebdecd40"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.111539    3066 scope.go:117] "RemoveContainer" containerID="b449348d8d9f2cc84755369ed022c81947ad243806a7a93294c7e779dce89561"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: E0906 19:51:24.246447    3066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-306799?timeout=10s\": dial tcp 192.168.50.125:8443: connect: connection refused" interval="800ms"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: I0906 19:51:24.424563    3066 kubelet_node_status.go:72] "Attempting to register node" node="pause-306799"
	Sep 06 19:51:24 pause-306799 kubelet[3066]: E0906 19:51:24.425759    3066 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.125:8443: connect: connection refused" node="pause-306799"
	Sep 06 19:51:25 pause-306799 kubelet[3066]: I0906 19:51:25.228300    3066 kubelet_node_status.go:72] "Attempting to register node" node="pause-306799"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.300093    3066 kubelet_node_status.go:111] "Node was previously registered" node="pause-306799"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.300236    3066 kubelet_node_status.go:75] "Successfully registered node" node="pause-306799"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.300268    3066 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.301285    3066 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.610828    3066 apiserver.go:52] "Watching apiserver"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.641392    3066 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.702415    3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14826bd0-43b5-4952-90f9-8f59ffc98e91-lib-modules\") pod \"kube-proxy-gkn5p\" (UID: \"14826bd0-43b5-4952-90f9-8f59ffc98e91\") " pod="kube-system/kube-proxy-gkn5p"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.702557    3066 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14826bd0-43b5-4952-90f9-8f59ffc98e91-xtables-lock\") pod \"kube-proxy-gkn5p\" (UID: \"14826bd0-43b5-4952-90f9-8f59ffc98e91\") " pod="kube-system/kube-proxy-gkn5p"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: E0906 19:51:27.826245    3066 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-pause-306799\" already exists" pod="kube-system/etcd-pause-306799"
	Sep 06 19:51:27 pause-306799 kubelet[3066]: I0906 19:51:27.914518    3066 scope.go:117] "RemoveContainer" containerID="ad259c357baa97fc1369c6164631fa67680e5875b0e1e0a820d622f5f17a159b"
	Sep 06 19:51:33 pause-306799 kubelet[3066]: E0906 19:51:33.723837    3066 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652293723382398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:51:33 pause-306799 kubelet[3066]: E0906 19:51:33.723995    3066 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652293723382398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:51:34 pause-306799 kubelet[3066]: I0906 19:51:34.752443    3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 06 19:51:43 pause-306799 kubelet[3066]: E0906 19:51:43.725973    3066 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652303725357070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 19:51:43 pause-306799 kubelet[3066]: E0906 19:51:43.726088    3066 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725652303725357070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-306799 -n pause-306799
helpers_test.go:261: (dbg) Run:  kubectl --context pause-306799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (55.56s)
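To re-run these post-mortem checks by hand, a minimal sketch (assuming the pause-306799 profile still exists; the final logs call is an addition for convenience, not part of the harness output above):

	# API server status for the profile (same command as helpers_test.go:254)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-306799 -n pause-306799
	# Pods not in Running phase (same command as helpers_test.go:261)
	kubectl --context pause-306799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# Collect roughly the same kubelet/CRI-O/control-plane logs as the post-mortem dump
	out/minikube-linux-amd64 logs -p pause-306799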

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (289.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-843298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-843298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m49.356824719s)

                                                
                                                
-- stdout --
	* [old-k8s-version-843298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-843298" primary control-plane node in "old-k8s-version-843298" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:54:28.076404   65729 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:54:28.076559   65729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:54:28.076576   65729 out.go:358] Setting ErrFile to fd 2...
	I0906 19:54:28.076584   65729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:54:28.076899   65729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:54:28.077680   65729 out.go:352] Setting JSON to false
	I0906 19:54:28.079143   65729 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5817,"bootTime":1725646651,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:54:28.079234   65729 start.go:139] virtualization: kvm guest
	I0906 19:54:28.081346   65729 out.go:177] * [old-k8s-version-843298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:54:28.082886   65729 notify.go:220] Checking for updates...
	I0906 19:54:28.083523   65729 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:54:28.084922   65729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:54:28.086065   65729 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:54:28.087606   65729 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:54:28.090834   65729 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:54:28.092262   65729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:54:28.094418   65729 config.go:182] Loaded profile config "bridge-603826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:54:28.094563   65729 config.go:182] Loaded profile config "enable-default-cni-603826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:54:28.094695   65729 config.go:182] Loaded profile config "flannel-603826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:54:28.094819   65729 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:54:28.141936   65729 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 19:54:28.146036   65729 start.go:297] selected driver: kvm2
	I0906 19:54:28.146066   65729 start.go:901] validating driver "kvm2" against <nil>
	I0906 19:54:28.146079   65729 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:54:28.146838   65729 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:54:28.146917   65729 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 19:54:28.166768   65729 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 19:54:28.166827   65729 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 19:54:28.167127   65729 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 19:54:28.167217   65729 cni.go:84] Creating CNI manager for ""
	I0906 19:54:28.167244   65729 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:54:28.167258   65729 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 19:54:28.167328   65729 start.go:340] cluster config:
	{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:54:28.167478   65729 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 19:54:28.169143   65729 out.go:177] * Starting "old-k8s-version-843298" primary control-plane node in "old-k8s-version-843298" cluster
	I0906 19:54:28.170181   65729 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 19:54:28.170234   65729 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 19:54:28.170245   65729 cache.go:56] Caching tarball of preloaded images
	I0906 19:54:28.170366   65729 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 19:54:28.170383   65729 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0906 19:54:28.170526   65729 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 19:54:28.170556   65729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json: {Name:mk2da3f8e8571ee4b6aefae1c2fec3d1e193d0cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:54:28.170768   65729 start.go:360] acquireMachinesLock for old-k8s-version-843298: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 19:54:43.902171   65729 start.go:364] duration metric: took 15.731372839s to acquireMachinesLock for "old-k8s-version-843298"
	I0906 19:54:43.902239   65729 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 19:54:43.902369   65729 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 19:54:43.904259   65729 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 19:54:43.904465   65729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:54:43.904520   65729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:54:43.926453   65729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0906 19:54:43.927068   65729 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:54:43.927854   65729 main.go:141] libmachine: Using API Version  1
	I0906 19:54:43.927890   65729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:54:43.928350   65729 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:54:43.928654   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 19:54:43.928926   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 19:54:43.929186   65729 start.go:159] libmachine.API.Create for "old-k8s-version-843298" (driver="kvm2")
	I0906 19:54:43.929221   65729 client.go:168] LocalClient.Create starting
	I0906 19:54:43.929256   65729 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 19:54:43.929293   65729 main.go:141] libmachine: Decoding PEM data...
	I0906 19:54:43.929332   65729 main.go:141] libmachine: Parsing certificate...
	I0906 19:54:43.929408   65729 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 19:54:43.929449   65729 main.go:141] libmachine: Decoding PEM data...
	I0906 19:54:43.929462   65729 main.go:141] libmachine: Parsing certificate...
	I0906 19:54:43.929492   65729 main.go:141] libmachine: Running pre-create checks...
	I0906 19:54:43.929512   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .PreCreateCheck
	I0906 19:54:43.930010   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetConfigRaw
	I0906 19:54:43.930537   65729 main.go:141] libmachine: Creating machine...
	I0906 19:54:43.930554   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .Create
	I0906 19:54:43.931204   65729 main.go:141] libmachine: (old-k8s-version-843298) Creating KVM machine...
	I0906 19:54:43.932206   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found existing default KVM network
	I0906 19:54:43.937745   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:43.933828   66260 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c9:0c:ac} reservation:<nil>}
	I0906 19:54:43.937776   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:43.935216   66260 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:53:7b:4e} reservation:<nil>}
	I0906 19:54:43.937812   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:43.936165   66260 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f5:76:c9} reservation:<nil>}
	I0906 19:54:43.937832   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:43.937549   66260 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002f9160}
	I0906 19:54:43.937846   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | created network xml: 
	I0906 19:54:43.937854   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | <network>
	I0906 19:54:43.937865   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG |   <name>mk-old-k8s-version-843298</name>
	I0906 19:54:43.937873   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG |   <dns enable='no'/>
	I0906 19:54:43.937882   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG |   
	I0906 19:54:43.937891   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0906 19:54:43.937900   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG |     <dhcp>
	I0906 19:54:43.937909   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0906 19:54:43.937924   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG |     </dhcp>
	I0906 19:54:43.937932   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG |   </ip>
	I0906 19:54:43.937941   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG |   
	I0906 19:54:43.937948   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | </network>
	I0906 19:54:43.937958   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | 
	I0906 19:54:43.945423   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | trying to create private KVM network mk-old-k8s-version-843298 192.168.72.0/24...
	I0906 19:54:44.036559   65729 main.go:141] libmachine: (old-k8s-version-843298) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298 ...
	I0906 19:54:44.036590   65729 main.go:141] libmachine: (old-k8s-version-843298) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 19:54:44.036629   65729 main.go:141] libmachine: (old-k8s-version-843298) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 19:54:44.036648   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | private KVM network mk-old-k8s-version-843298 192.168.72.0/24 created
	I0906 19:54:44.036673   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:44.033673   66260 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:54:44.328523   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:44.328400   66260 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa...
	I0906 19:54:44.542007   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:44.541897   66260 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/old-k8s-version-843298.rawdisk...
	I0906 19:54:44.542034   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Writing magic tar header
	I0906 19:54:44.542057   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Writing SSH key tar header
	I0906 19:54:44.542077   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:44.542046   66260 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298 ...
	I0906 19:54:44.542188   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298
	I0906 19:54:44.542220   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 19:54:44.542243   65729 main.go:141] libmachine: (old-k8s-version-843298) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298 (perms=drwx------)
	I0906 19:54:44.542257   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:54:44.542283   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 19:54:44.542297   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 19:54:44.542312   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Checking permissions on dir: /home/jenkins
	I0906 19:54:44.542322   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Checking permissions on dir: /home
	I0906 19:54:44.542333   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Skipping /home - not owner
	I0906 19:54:44.542347   65729 main.go:141] libmachine: (old-k8s-version-843298) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 19:54:44.542362   65729 main.go:141] libmachine: (old-k8s-version-843298) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 19:54:44.542373   65729 main.go:141] libmachine: (old-k8s-version-843298) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 19:54:44.542385   65729 main.go:141] libmachine: (old-k8s-version-843298) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 19:54:44.542397   65729 main.go:141] libmachine: (old-k8s-version-843298) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 19:54:44.542411   65729 main.go:141] libmachine: (old-k8s-version-843298) Creating domain...
	I0906 19:54:44.543342   65729 main.go:141] libmachine: (old-k8s-version-843298) define libvirt domain using xml: 
	I0906 19:54:44.543367   65729 main.go:141] libmachine: (old-k8s-version-843298) <domain type='kvm'>
	I0906 19:54:44.543379   65729 main.go:141] libmachine: (old-k8s-version-843298)   <name>old-k8s-version-843298</name>
	I0906 19:54:44.543387   65729 main.go:141] libmachine: (old-k8s-version-843298)   <memory unit='MiB'>2200</memory>
	I0906 19:54:44.543396   65729 main.go:141] libmachine: (old-k8s-version-843298)   <vcpu>2</vcpu>
	I0906 19:54:44.543404   65729 main.go:141] libmachine: (old-k8s-version-843298)   <features>
	I0906 19:54:44.543420   65729 main.go:141] libmachine: (old-k8s-version-843298)     <acpi/>
	I0906 19:54:44.543427   65729 main.go:141] libmachine: (old-k8s-version-843298)     <apic/>
	I0906 19:54:44.543435   65729 main.go:141] libmachine: (old-k8s-version-843298)     <pae/>
	I0906 19:54:44.543446   65729 main.go:141] libmachine: (old-k8s-version-843298)     
	I0906 19:54:44.543455   65729 main.go:141] libmachine: (old-k8s-version-843298)   </features>
	I0906 19:54:44.543463   65729 main.go:141] libmachine: (old-k8s-version-843298)   <cpu mode='host-passthrough'>
	I0906 19:54:44.543471   65729 main.go:141] libmachine: (old-k8s-version-843298)   
	I0906 19:54:44.543477   65729 main.go:141] libmachine: (old-k8s-version-843298)   </cpu>
	I0906 19:54:44.543485   65729 main.go:141] libmachine: (old-k8s-version-843298)   <os>
	I0906 19:54:44.543493   65729 main.go:141] libmachine: (old-k8s-version-843298)     <type>hvm</type>
	I0906 19:54:44.543502   65729 main.go:141] libmachine: (old-k8s-version-843298)     <boot dev='cdrom'/>
	I0906 19:54:44.543511   65729 main.go:141] libmachine: (old-k8s-version-843298)     <boot dev='hd'/>
	I0906 19:54:44.543535   65729 main.go:141] libmachine: (old-k8s-version-843298)     <bootmenu enable='no'/>
	I0906 19:54:44.543549   65729 main.go:141] libmachine: (old-k8s-version-843298)   </os>
	I0906 19:54:44.543567   65729 main.go:141] libmachine: (old-k8s-version-843298)   <devices>
	I0906 19:54:44.543577   65729 main.go:141] libmachine: (old-k8s-version-843298)     <disk type='file' device='cdrom'>
	I0906 19:54:44.543590   65729 main.go:141] libmachine: (old-k8s-version-843298)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/boot2docker.iso'/>
	I0906 19:54:44.543603   65729 main.go:141] libmachine: (old-k8s-version-843298)       <target dev='hdc' bus='scsi'/>
	I0906 19:54:44.543622   65729 main.go:141] libmachine: (old-k8s-version-843298)       <readonly/>
	I0906 19:54:44.543633   65729 main.go:141] libmachine: (old-k8s-version-843298)     </disk>
	I0906 19:54:44.543643   65729 main.go:141] libmachine: (old-k8s-version-843298)     <disk type='file' device='disk'>
	I0906 19:54:44.543653   65729 main.go:141] libmachine: (old-k8s-version-843298)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 19:54:44.543669   65729 main.go:141] libmachine: (old-k8s-version-843298)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/old-k8s-version-843298.rawdisk'/>
	I0906 19:54:44.543677   65729 main.go:141] libmachine: (old-k8s-version-843298)       <target dev='hda' bus='virtio'/>
	I0906 19:54:44.543685   65729 main.go:141] libmachine: (old-k8s-version-843298)     </disk>
	I0906 19:54:44.543692   65729 main.go:141] libmachine: (old-k8s-version-843298)     <interface type='network'>
	I0906 19:54:44.543702   65729 main.go:141] libmachine: (old-k8s-version-843298)       <source network='mk-old-k8s-version-843298'/>
	I0906 19:54:44.543710   65729 main.go:141] libmachine: (old-k8s-version-843298)       <model type='virtio'/>
	I0906 19:54:44.543722   65729 main.go:141] libmachine: (old-k8s-version-843298)     </interface>
	I0906 19:54:44.543730   65729 main.go:141] libmachine: (old-k8s-version-843298)     <interface type='network'>
	I0906 19:54:44.543740   65729 main.go:141] libmachine: (old-k8s-version-843298)       <source network='default'/>
	I0906 19:54:44.543747   65729 main.go:141] libmachine: (old-k8s-version-843298)       <model type='virtio'/>
	I0906 19:54:44.543755   65729 main.go:141] libmachine: (old-k8s-version-843298)     </interface>
	I0906 19:54:44.543762   65729 main.go:141] libmachine: (old-k8s-version-843298)     <serial type='pty'>
	I0906 19:54:44.543770   65729 main.go:141] libmachine: (old-k8s-version-843298)       <target port='0'/>
	I0906 19:54:44.543776   65729 main.go:141] libmachine: (old-k8s-version-843298)     </serial>
	I0906 19:54:44.543784   65729 main.go:141] libmachine: (old-k8s-version-843298)     <console type='pty'>
	I0906 19:54:44.543793   65729 main.go:141] libmachine: (old-k8s-version-843298)       <target type='serial' port='0'/>
	I0906 19:54:44.543800   65729 main.go:141] libmachine: (old-k8s-version-843298)     </console>
	I0906 19:54:44.543807   65729 main.go:141] libmachine: (old-k8s-version-843298)     <rng model='virtio'>
	I0906 19:54:44.543817   65729 main.go:141] libmachine: (old-k8s-version-843298)       <backend model='random'>/dev/random</backend>
	I0906 19:54:44.543825   65729 main.go:141] libmachine: (old-k8s-version-843298)     </rng>
	I0906 19:54:44.543833   65729 main.go:141] libmachine: (old-k8s-version-843298)     
	I0906 19:54:44.543839   65729 main.go:141] libmachine: (old-k8s-version-843298)     
	I0906 19:54:44.543847   65729 main.go:141] libmachine: (old-k8s-version-843298)   </devices>
	I0906 19:54:44.543854   65729 main.go:141] libmachine: (old-k8s-version-843298) </domain>
	I0906 19:54:44.543865   65729 main.go:141] libmachine: (old-k8s-version-843298) 
	I0906 19:54:44.548288   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:d1:3c:92 in network default
	I0906 19:54:44.548912   65729 main.go:141] libmachine: (old-k8s-version-843298) Ensuring networks are active...
	I0906 19:54:44.548933   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:44.549584   65729 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network default is active
	I0906 19:54:44.549987   65729 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network mk-old-k8s-version-843298 is active
	I0906 19:54:44.550819   65729 main.go:141] libmachine: (old-k8s-version-843298) Getting domain xml...
	I0906 19:54:44.552922   65729 main.go:141] libmachine: (old-k8s-version-843298) Creating domain...
	I0906 19:54:46.133597   65729 main.go:141] libmachine: (old-k8s-version-843298) Waiting to get IP...
	I0906 19:54:46.134537   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:46.135082   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:46.135115   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:46.135058   66260 retry.go:31] will retry after 250.223405ms: waiting for machine to come up
	I0906 19:54:46.386821   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:46.387440   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:46.387472   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:46.387355   66260 retry.go:31] will retry after 245.474985ms: waiting for machine to come up
	I0906 19:54:46.635150   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:46.635751   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:46.635779   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:46.635703   66260 retry.go:31] will retry after 328.165713ms: waiting for machine to come up
	I0906 19:54:46.965772   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:46.966128   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:46.966145   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:46.966053   66260 retry.go:31] will retry after 526.384709ms: waiting for machine to come up
	I0906 19:54:47.494004   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:47.494615   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:47.494646   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:47.494572   66260 retry.go:31] will retry after 594.927459ms: waiting for machine to come up
	I0906 19:54:48.091188   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:48.091752   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:48.091779   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:48.091727   66260 retry.go:31] will retry after 819.364786ms: waiting for machine to come up
	I0906 19:54:48.912592   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:48.913109   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:48.913138   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:48.913042   66260 retry.go:31] will retry after 902.011192ms: waiting for machine to come up
	I0906 19:54:49.816998   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:49.817602   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:49.817630   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:49.817564   66260 retry.go:31] will retry after 946.978217ms: waiting for machine to come up
	I0906 19:54:50.765646   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:50.766251   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:50.766293   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:50.766188   66260 retry.go:31] will retry after 1.187186144s: waiting for machine to come up
	I0906 19:54:52.560553   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:52.561611   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:52.561638   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:52.561572   66260 retry.go:31] will retry after 2.157865222s: waiting for machine to come up
	I0906 19:54:54.721307   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:54.721844   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:54.721871   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:54.721756   66260 retry.go:31] will retry after 2.3832993s: waiting for machine to come up
	I0906 19:54:57.106620   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:54:57.107229   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:54:57.107265   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:54:57.107186   66260 retry.go:31] will retry after 3.123797287s: waiting for machine to come up
	I0906 19:55:00.232656   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:00.233223   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:55:00.233254   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:55:00.233187   66260 retry.go:31] will retry after 2.836771705s: waiting for machine to come up
	I0906 19:55:03.073183   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:03.073726   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 19:55:03.073749   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 19:55:03.073687   66260 retry.go:31] will retry after 3.776190846s: waiting for machine to come up
	I0906 19:55:06.853335   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:06.853895   65729 main.go:141] libmachine: (old-k8s-version-843298) Found IP for machine: 192.168.72.30
	I0906 19:55:06.853928   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has current primary IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:06.853937   65729 main.go:141] libmachine: (old-k8s-version-843298) Reserving static IP address...
	I0906 19:55:06.854221   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"} in network mk-old-k8s-version-843298
	I0906 19:55:06.936500   65729 main.go:141] libmachine: (old-k8s-version-843298) Reserved static IP address: 192.168.72.30
	I0906 19:55:06.936533   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Getting to WaitForSSH function...
	I0906 19:55:06.936542   65729 main.go:141] libmachine: (old-k8s-version-843298) Waiting for SSH to be available...
	I0906 19:55:06.939920   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:06.940301   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298
	I0906 19:55:06.940329   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find defined IP address of network mk-old-k8s-version-843298 interface with MAC address 52:54:00:35:91:5e
	I0906 19:55:06.940560   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH client type: external
	I0906 19:55:06.940584   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa (-rw-------)
	I0906 19:55:06.940627   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 19:55:06.940641   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | About to run SSH command:
	I0906 19:55:06.940659   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | exit 0
	I0906 19:55:06.946432   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | SSH cmd err, output: exit status 255: 
	I0906 19:55:06.946459   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0906 19:55:06.946470   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | command : exit 0
	I0906 19:55:06.946484   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | err     : exit status 255
	I0906 19:55:06.946497   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | output  : 
	I0906 19:55:09.946688   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Getting to WaitForSSH function...
	I0906 19:55:09.949427   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:09.949924   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:09.949958   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:09.950030   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH client type: external
	I0906 19:55:09.950058   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa (-rw-------)
	I0906 19:55:09.950085   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 19:55:09.950117   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | About to run SSH command:
	I0906 19:55:09.950132   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | exit 0
	I0906 19:55:10.078073   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | SSH cmd err, output: <nil>: 
	I0906 19:55:10.078302   65729 main.go:141] libmachine: (old-k8s-version-843298) KVM machine creation complete!
	I0906 19:55:10.078700   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetConfigRaw
	I0906 19:55:10.079277   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 19:55:10.079514   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 19:55:10.079695   65729 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0906 19:55:10.079708   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetState
	I0906 19:55:10.081175   65729 main.go:141] libmachine: Detecting operating system of created instance...
	I0906 19:55:10.081192   65729 main.go:141] libmachine: Waiting for SSH to be available...
	I0906 19:55:10.081200   65729 main.go:141] libmachine: Getting to WaitForSSH function...
	I0906 19:55:10.081209   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:10.083780   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.084205   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:10.084235   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.084383   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 19:55:10.084564   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:10.084877   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:10.085096   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 19:55:10.085283   65729 main.go:141] libmachine: Using SSH client type: native
	I0906 19:55:10.085465   65729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 19:55:10.085476   65729 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0906 19:55:10.192463   65729 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:55:10.192489   65729 main.go:141] libmachine: Detecting the provisioner...
	I0906 19:55:10.192497   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:10.195263   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.195557   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:10.195583   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.195782   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 19:55:10.195982   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:10.196109   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:10.196213   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 19:55:10.196313   65729 main.go:141] libmachine: Using SSH client type: native
	I0906 19:55:10.196517   65729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 19:55:10.196531   65729 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0906 19:55:10.301700   65729 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0906 19:55:10.301804   65729 main.go:141] libmachine: found compatible host: buildroot
	I0906 19:55:10.301819   65729 main.go:141] libmachine: Provisioning with buildroot...
	I0906 19:55:10.301835   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 19:55:10.302108   65729 buildroot.go:166] provisioning hostname "old-k8s-version-843298"
	I0906 19:55:10.302135   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 19:55:10.302314   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:10.304775   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.305108   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:10.305137   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.305261   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 19:55:10.305466   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:10.305629   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:10.305770   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 19:55:10.305967   65729 main.go:141] libmachine: Using SSH client type: native
	I0906 19:55:10.306153   65729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 19:55:10.306169   65729 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-843298 && echo "old-k8s-version-843298" | sudo tee /etc/hostname
	I0906 19:55:10.426585   65729 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-843298
	
	I0906 19:55:10.426612   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:10.429289   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.429629   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:10.429654   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.429919   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 19:55:10.430116   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:10.430313   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:10.430472   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 19:55:10.430659   65729 main.go:141] libmachine: Using SSH client type: native
	I0906 19:55:10.430859   65729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 19:55:10.430885   65729 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-843298' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-843298/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-843298' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 19:55:10.541580   65729 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 19:55:10.541607   65729 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 19:55:10.541624   65729 buildroot.go:174] setting up certificates
	I0906 19:55:10.541634   65729 provision.go:84] configureAuth start
	I0906 19:55:10.541642   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 19:55:10.541932   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 19:55:10.544786   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.545163   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:10.545193   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.545471   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:10.547836   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.548141   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:10.548165   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.548334   65729 provision.go:143] copyHostCerts
	I0906 19:55:10.548395   65729 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 19:55:10.548411   65729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 19:55:10.548476   65729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 19:55:10.548601   65729 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 19:55:10.548621   65729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 19:55:10.548646   65729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 19:55:10.548725   65729 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 19:55:10.548733   65729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 19:55:10.548755   65729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 19:55:10.548821   65729 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-843298 san=[127.0.0.1 192.168.72.30 localhost minikube old-k8s-version-843298]
	I0906 19:55:10.837290   65729 provision.go:177] copyRemoteCerts
	I0906 19:55:10.837353   65729 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 19:55:10.837409   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:10.839998   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.840329   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:10.840359   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:10.840550   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 19:55:10.840726   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:10.840871   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 19:55:10.841018   65729 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 19:55:10.924124   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 19:55:10.949333   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0906 19:55:10.973417   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 19:55:10.999199   65729 provision.go:87] duration metric: took 457.554993ms to configureAuth
	I0906 19:55:10.999226   65729 buildroot.go:189] setting minikube options for container-runtime
	I0906 19:55:10.999380   65729 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 19:55:10.999461   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:11.002025   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.002394   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:11.002425   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.002655   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 19:55:11.002864   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:11.003038   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:11.003189   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 19:55:11.003333   65729 main.go:141] libmachine: Using SSH client type: native
	I0906 19:55:11.003507   65729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 19:55:11.003522   65729 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 19:55:11.232212   65729 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 19:55:11.232249   65729 main.go:141] libmachine: Checking connection to Docker...
	I0906 19:55:11.232260   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetURL
	I0906 19:55:11.233663   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using libvirt version 6000000
	I0906 19:55:11.235817   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.236184   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:11.236220   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.236451   65729 main.go:141] libmachine: Docker is up and running!
	I0906 19:55:11.236468   65729 main.go:141] libmachine: Reticulating splines...
	I0906 19:55:11.236476   65729 client.go:171] duration metric: took 27.307246875s to LocalClient.Create
	I0906 19:55:11.236502   65729 start.go:167] duration metric: took 27.307318248s to libmachine.API.Create "old-k8s-version-843298"
	I0906 19:55:11.236512   65729 start.go:293] postStartSetup for "old-k8s-version-843298" (driver="kvm2")
	I0906 19:55:11.236524   65729 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 19:55:11.236549   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 19:55:11.236827   65729 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 19:55:11.236851   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:11.239102   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.239455   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:11.239486   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.239606   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 19:55:11.239800   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:11.239961   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 19:55:11.240105   65729 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 19:55:11.328138   65729 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 19:55:11.332524   65729 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 19:55:11.332557   65729 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 19:55:11.332626   65729 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 19:55:11.332724   65729 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 19:55:11.332843   65729 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 19:55:11.345570   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:55:11.372222   65729 start.go:296] duration metric: took 135.696548ms for postStartSetup
	I0906 19:55:11.372293   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetConfigRaw
	I0906 19:55:11.372971   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 19:55:11.375606   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.375990   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:11.376019   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.376315   65729 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 19:55:11.376575   65729 start.go:128] duration metric: took 27.47419227s to createHost
	I0906 19:55:11.376600   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:11.378871   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.379222   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:11.379251   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.379411   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 19:55:11.379589   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:11.379724   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:11.379856   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 19:55:11.379987   65729 main.go:141] libmachine: Using SSH client type: native
	I0906 19:55:11.380175   65729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 19:55:11.380187   65729 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 19:55:11.481688   65729 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725652511.456992887
	
	I0906 19:55:11.481715   65729 fix.go:216] guest clock: 1725652511.456992887
	I0906 19:55:11.481727   65729 fix.go:229] Guest: 2024-09-06 19:55:11.456992887 +0000 UTC Remote: 2024-09-06 19:55:11.376588149 +0000 UTC m=+43.348787073 (delta=80.404738ms)
	I0906 19:55:11.481756   65729 fix.go:200] guest clock delta is within tolerance: 80.404738ms
	I0906 19:55:11.481763   65729 start.go:83] releasing machines lock for "old-k8s-version-843298", held for 27.57955527s
	I0906 19:55:11.481803   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 19:55:11.482114   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 19:55:11.485040   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.485374   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:11.485418   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.485526   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 19:55:11.486008   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 19:55:11.486190   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 19:55:11.486287   65729 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 19:55:11.486329   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:11.486419   65729 ssh_runner.go:195] Run: cat /version.json
	I0906 19:55:11.486445   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 19:55:11.489194   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.489430   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.489589   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:11.489617   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.489771   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 19:55:11.489852   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:11.489878   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:11.489959   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:11.490071   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 19:55:11.490130   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 19:55:11.490215   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 19:55:11.490289   65729 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 19:55:11.490376   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 19:55:11.490508   65729 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 19:55:11.588992   65729 ssh_runner.go:195] Run: systemctl --version
	I0906 19:55:11.598226   65729 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 19:55:11.765044   65729 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 19:55:11.771602   65729 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 19:55:11.771676   65729 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 19:55:11.789865   65729 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 19:55:11.789894   65729 start.go:495] detecting cgroup driver to use...
	I0906 19:55:11.789958   65729 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 19:55:11.813177   65729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 19:55:11.828257   65729 docker.go:217] disabling cri-docker service (if available) ...
	I0906 19:55:11.828326   65729 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 19:55:11.842582   65729 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 19:55:11.857099   65729 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 19:55:11.974670   65729 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 19:55:12.140449   65729 docker.go:233] disabling docker service ...
	I0906 19:55:12.140526   65729 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 19:55:12.155562   65729 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 19:55:12.168678   65729 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 19:55:12.304121   65729 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 19:55:12.437664   65729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 19:55:12.456690   65729 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 19:55:12.475802   65729 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 19:55:12.475859   65729 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:55:12.489344   65729 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 19:55:12.489404   65729 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:55:12.500918   65729 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:55:12.512554   65729 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 19:55:12.523800   65729 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 19:55:12.534965   65729 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 19:55:12.545126   65729 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 19:55:12.545185   65729 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 19:55:12.561932   65729 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 19:55:12.571818   65729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:55:12.694676   65729 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 19:55:12.795451   65729 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 19:55:12.795525   65729 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 19:55:12.800344   65729 start.go:563] Will wait 60s for crictl version
	I0906 19:55:12.800401   65729 ssh_runner.go:195] Run: which crictl
	I0906 19:55:12.804646   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 19:55:12.853474   65729 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 19:55:12.853567   65729 ssh_runner.go:195] Run: crio --version
	I0906 19:55:12.887717   65729 ssh_runner.go:195] Run: crio --version
	I0906 19:55:12.927892   65729 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0906 19:55:12.929083   65729 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 19:55:12.932193   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:12.932593   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 19:55:12.932631   65729 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 19:55:12.932941   65729 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0906 19:55:12.937312   65729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 19:55:12.949967   65729 kubeadm.go:883] updating cluster {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 19:55:12.950073   65729 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 19:55:12.950126   65729 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:55:12.983594   65729 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 19:55:12.983667   65729 ssh_runner.go:195] Run: which lz4
	I0906 19:55:12.987657   65729 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 19:55:12.991806   65729 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 19:55:12.991838   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0906 19:55:14.758376   65729 crio.go:462] duration metric: took 1.770757183s to copy over tarball
	I0906 19:55:14.758471   65729 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 19:55:17.544178   65729 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.785674201s)
	I0906 19:55:17.544207   65729 crio.go:469] duration metric: took 2.785788205s to extract the tarball
	I0906 19:55:17.544216   65729 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 19:55:17.599711   65729 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 19:55:17.676299   65729 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 19:55:17.676326   65729 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 19:55:17.676448   65729 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0906 19:55:17.676484   65729 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:55:17.676493   65729 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:55:17.676485   65729 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0906 19:55:17.676500   65729 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:55:17.676420   65729 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:55:17.676461   65729 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0906 19:55:17.676429   65729 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:55:17.678266   65729 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0906 19:55:17.678284   65729 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:55:17.678299   65729 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:55:17.678268   65729 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:55:17.678268   65729 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 19:55:17.678346   65729 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:55:17.678359   65729 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:55:17.678371   65729 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0906 19:55:17.841942   65729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:55:17.845591   65729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0906 19:55:17.858687   65729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:55:17.871916   65729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:55:17.880491   65729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0906 19:55:17.891908   65729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:55:17.920721   65729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 19:55:17.925277   65729 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0906 19:55:17.925334   65729 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:55:17.925378   65729 ssh_runner.go:195] Run: which crictl
	I0906 19:55:17.967117   65729 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0906 19:55:17.967159   65729 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0906 19:55:17.967222   65729 ssh_runner.go:195] Run: which crictl
	I0906 19:55:18.032733   65729 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0906 19:55:18.032778   65729 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:55:18.032785   65729 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0906 19:55:18.032814   65729 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:55:18.032828   65729 ssh_runner.go:195] Run: which crictl
	I0906 19:55:18.032870   65729 ssh_runner.go:195] Run: which crictl
	I0906 19:55:18.035075   65729 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0906 19:55:18.035113   65729 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0906 19:55:18.035147   65729 ssh_runner.go:195] Run: which crictl
	I0906 19:55:18.055002   65729 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0906 19:55:18.055051   65729 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:55:18.055112   65729 ssh_runner.go:195] Run: which crictl
	I0906 19:55:18.072190   65729 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0906 19:55:18.072240   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:55:18.072243   65729 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 19:55:18.072249   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 19:55:18.072272   65729 ssh_runner.go:195] Run: which crictl
	I0906 19:55:18.072289   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:55:18.072335   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 19:55:18.072303   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:55:18.072355   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:55:18.233974   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:55:18.234062   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 19:55:18.234113   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:55:18.234069   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:55:18.234135   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 19:55:18.234273   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 19:55:18.368287   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 19:55:18.368371   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:55:18.368452   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 19:55:18.368614   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 19:55:18.439301   65729 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 19:55:18.470629   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 19:55:18.470695   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 19:55:18.470813   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 19:55:18.470918   65729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0906 19:55:18.470976   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 19:55:18.470982   65729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0906 19:55:18.471105   65729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0906 19:55:18.671597   65729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0906 19:55:18.671669   65729 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 19:55:18.671689   65729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0906 19:55:18.671723   65729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0906 19:55:18.707580   65729 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0906 19:55:18.707649   65729 cache_images.go:92] duration metric: took 1.031298352s to LoadCachedImages
	W0906 19:55:18.707740   65729 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0906 19:55:18.707754   65729 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.20.0 crio true true} ...
	I0906 19:55:18.707856   65729 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-843298 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 19:55:18.707932   65729 ssh_runner.go:195] Run: crio config
	I0906 19:55:18.756142   65729 cni.go:84] Creating CNI manager for ""
	I0906 19:55:18.756169   65729 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 19:55:18.756191   65729 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 19:55:18.756214   65729 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-843298 NodeName:old-k8s-version-843298 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 19:55:18.756390   65729 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-843298"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 19:55:18.756461   65729 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0906 19:55:18.767207   65729 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 19:55:18.767283   65729 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 19:55:18.778315   65729 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0906 19:55:18.797647   65729 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 19:55:18.814760   65729 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0906 19:55:18.833494   65729 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0906 19:55:18.838698   65729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 19:55:18.855316   65729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 19:55:19.026615   65729 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 19:55:19.045978   65729 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298 for IP: 192.168.72.30
	I0906 19:55:19.046003   65729 certs.go:194] generating shared ca certs ...
	I0906 19:55:19.046019   65729 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:55:19.046200   65729 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 19:55:19.046267   65729 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 19:55:19.046290   65729 certs.go:256] generating profile certs ...
	I0906 19:55:19.046357   65729 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.key
	I0906 19:55:19.046373   65729 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.crt with IP's: []
	I0906 19:55:19.260018   65729 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.crt ...
	I0906 19:55:19.260056   65729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.crt: {Name:mke6dc4f712c7e5e8cd85e01995b611058c0d7c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:55:19.260267   65729 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.key ...
	I0906 19:55:19.260294   65729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.key: {Name:mk4909c7628b29b2b5a871a8256c6046576efbba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:55:19.260418   65729 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3
	I0906 19:55:19.260438   65729 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt.f5190fa3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.30]
	I0906 19:55:19.468106   65729 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt.f5190fa3 ...
	I0906 19:55:19.468152   65729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt.f5190fa3: {Name:mk62b9318e796051ca03e2b68fbc690b665dc239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:55:19.468325   65729 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3 ...
	I0906 19:55:19.468341   65729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3: {Name:mk24074bb81ca2e42f8829309c98d6ed60f63964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:55:19.468417   65729 certs.go:381] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt.f5190fa3 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt
	I0906 19:55:19.468483   65729 certs.go:385] copying /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3 -> /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key
	I0906 19:55:19.468533   65729 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key
	I0906 19:55:19.468553   65729 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt with IP's: []
	I0906 19:55:19.573789   65729 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt ...
	I0906 19:55:19.573832   65729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt: {Name:mk321c38cd194413d66067eafe7c24905c5e84f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:55:19.574051   65729 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key ...
	I0906 19:55:19.574068   65729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key: {Name:mk00d31908c4a41782ef3aa004df68f4021d983b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 19:55:19.574282   65729 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 19:55:19.574323   65729 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 19:55:19.574330   65729 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 19:55:19.574390   65729 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 19:55:19.574419   65729 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 19:55:19.574446   65729 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 19:55:19.574490   65729 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 19:55:19.575117   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 19:55:19.610686   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 19:55:19.644812   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 19:55:19.676482   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 19:55:19.703439   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 19:55:19.730096   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 19:55:19.758047   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 19:55:19.786051   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 19:55:19.813664   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 19:55:19.841953   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 19:55:19.870448   65729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 19:55:19.908616   65729 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 19:55:19.934228   65729 ssh_runner.go:195] Run: openssl version
	I0906 19:55:19.942816   65729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 19:55:19.961582   65729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 19:55:19.971447   65729 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 19:55:19.971536   65729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 19:55:19.981378   65729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 19:55:20.000436   65729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 19:55:20.022684   65729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 19:55:20.029071   65729 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 19:55:20.029142   65729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 19:55:20.036080   65729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 19:55:20.051280   65729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 19:55:20.063240   65729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:55:20.069294   65729 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:55:20.069380   65729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 19:55:20.077326   65729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 19:55:20.091901   65729 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 19:55:20.097701   65729 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 19:55:20.097770   65729 kubeadm.go:392] StartCluster: {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 19:55:20.097861   65729 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 19:55:20.097947   65729 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 19:55:20.143146   65729 cri.go:89] found id: ""
	I0906 19:55:20.143213   65729 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 19:55:20.153754   65729 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 19:55:20.166121   65729 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 19:55:20.176747   65729 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 19:55:20.176773   65729 kubeadm.go:157] found existing configuration files:
	
	I0906 19:55:20.176825   65729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 19:55:20.190006   65729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 19:55:20.190075   65729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 19:55:20.200826   65729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 19:55:20.210355   65729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 19:55:20.210411   65729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 19:55:20.220592   65729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 19:55:20.233741   65729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 19:55:20.233814   65729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 19:55:20.246447   65729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 19:55:20.258003   65729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 19:55:20.258062   65729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 19:55:20.268810   65729 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 19:55:20.591798   65729 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 19:57:19.493553   65729 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 19:57:19.493737   65729 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0906 19:57:19.494780   65729 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 19:57:19.494895   65729 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 19:57:19.495041   65729 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 19:57:19.495301   65729 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 19:57:19.495688   65729 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 19:57:19.495846   65729 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 19:57:19.497423   65729 out.go:235]   - Generating certificates and keys ...
	I0906 19:57:19.497521   65729 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 19:57:19.497620   65729 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 19:57:19.497762   65729 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 19:57:19.497865   65729 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 19:57:19.498002   65729 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 19:57:19.498096   65729 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 19:57:19.498219   65729 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 19:57:19.498439   65729 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-843298] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0906 19:57:19.498530   65729 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 19:57:19.498683   65729 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-843298] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0906 19:57:19.498747   65729 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 19:57:19.498802   65729 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 19:57:19.498849   65729 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 19:57:19.498902   65729 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 19:57:19.498946   65729 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 19:57:19.498990   65729 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 19:57:19.499047   65729 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 19:57:19.499094   65729 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 19:57:19.499178   65729 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 19:57:19.499252   65729 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 19:57:19.499290   65729 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 19:57:19.499348   65729 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 19:57:19.501229   65729 out.go:235]   - Booting up control plane ...
	I0906 19:57:19.501305   65729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 19:57:19.501372   65729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 19:57:19.501460   65729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 19:57:19.501572   65729 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 19:57:19.501764   65729 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 19:57:19.501824   65729 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 19:57:19.501889   65729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:57:19.502097   65729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:57:19.502184   65729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:57:19.502427   65729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:57:19.502491   65729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:57:19.502646   65729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:57:19.502708   65729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:57:19.502870   65729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:57:19.502931   65729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:57:19.503122   65729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:57:19.503134   65729 kubeadm.go:310] 
	I0906 19:57:19.503187   65729 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 19:57:19.503244   65729 kubeadm.go:310] 		timed out waiting for the condition
	I0906 19:57:19.503252   65729 kubeadm.go:310] 
	I0906 19:57:19.503304   65729 kubeadm.go:310] 	This error is likely caused by:
	I0906 19:57:19.503341   65729 kubeadm.go:310] 		- The kubelet is not running
	I0906 19:57:19.503443   65729 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 19:57:19.503454   65729 kubeadm.go:310] 
	I0906 19:57:19.503692   65729 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 19:57:19.503736   65729 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 19:57:19.503766   65729 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 19:57:19.503772   65729 kubeadm.go:310] 
	I0906 19:57:19.503865   65729 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 19:57:19.503942   65729 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 19:57:19.503952   65729 kubeadm.go:310] 
	I0906 19:57:19.504075   65729 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 19:57:19.504169   65729 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 19:57:19.504257   65729 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 19:57:19.504330   65729 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 19:57:19.504349   65729 kubeadm.go:310] 
	W0906 19:57:19.504448   65729 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-843298] and IPs [192.168.72.30 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-843298] and IPs [192.168.72.30 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0906 19:57:19.504503   65729 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 19:57:20.415354   65729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:57:20.429314   65729 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 19:57:20.438919   65729 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 19:57:20.438944   65729 kubeadm.go:157] found existing configuration files:
	
	I0906 19:57:20.438984   65729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 19:57:20.450281   65729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 19:57:20.450348   65729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 19:57:20.459331   65729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 19:57:20.467989   65729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 19:57:20.468033   65729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 19:57:20.477018   65729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 19:57:20.486920   65729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 19:57:20.486965   65729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 19:57:20.497254   65729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 19:57:20.507398   65729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 19:57:20.507486   65729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 19:57:20.517623   65729 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 19:57:20.592077   65729 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 19:57:20.592202   65729 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 19:57:20.744366   65729 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 19:57:20.744520   65729 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 19:57:20.744643   65729 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 19:57:20.918790   65729 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 19:57:20.920603   65729 out.go:235]   - Generating certificates and keys ...
	I0906 19:57:20.920675   65729 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 19:57:20.920727   65729 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 19:57:20.920789   65729 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 19:57:20.920837   65729 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 19:57:20.920917   65729 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 19:57:20.920965   65729 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 19:57:20.921018   65729 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 19:57:20.921066   65729 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 19:57:20.921124   65729 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 19:57:20.921188   65729 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 19:57:20.921219   65729 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 19:57:20.921275   65729 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 19:57:21.001018   65729 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 19:57:21.286237   65729 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 19:57:21.406738   65729 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 19:57:21.585836   65729 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 19:57:21.600750   65729 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 19:57:21.603026   65729 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 19:57:21.603258   65729 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 19:57:21.752726   65729 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 19:57:21.754572   65729 out.go:235]   - Booting up control plane ...
	I0906 19:57:21.754688   65729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 19:57:21.762359   65729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 19:57:21.764055   65729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 19:57:21.764886   65729 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 19:57:21.767308   65729 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 19:58:01.770408   65729 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 19:58:01.770809   65729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:58:01.771015   65729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:58:06.771736   65729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:58:06.771972   65729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:58:16.772272   65729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:58:16.772492   65729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:58:36.771591   65729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:58:36.771764   65729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:59:16.771804   65729 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 19:59:16.772014   65729 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 19:59:16.772025   65729 kubeadm.go:310] 
	I0906 19:59:16.772071   65729 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 19:59:16.772131   65729 kubeadm.go:310] 		timed out waiting for the condition
	I0906 19:59:16.772138   65729 kubeadm.go:310] 
	I0906 19:59:16.772165   65729 kubeadm.go:310] 	This error is likely caused by:
	I0906 19:59:16.772191   65729 kubeadm.go:310] 		- The kubelet is not running
	I0906 19:59:16.772282   65729 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 19:59:16.772286   65729 kubeadm.go:310] 
	I0906 19:59:16.772427   65729 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 19:59:16.772503   65729 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 19:59:16.772552   65729 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 19:59:16.772574   65729 kubeadm.go:310] 
	I0906 19:59:16.772702   65729 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 19:59:16.772826   65729 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 19:59:16.772840   65729 kubeadm.go:310] 
	I0906 19:59:16.773015   65729 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 19:59:16.773142   65729 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 19:59:16.773232   65729 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 19:59:16.773352   65729 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 19:59:16.773364   65729 kubeadm.go:310] 
	I0906 19:59:16.774074   65729 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 19:59:16.774150   65729 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 19:59:16.774203   65729 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0906 19:59:16.774258   65729 kubeadm.go:394] duration metric: took 3m56.676492569s to StartCluster
	I0906 19:59:16.774297   65729 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 19:59:16.774347   65729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 19:59:16.818115   65729 cri.go:89] found id: ""
	I0906 19:59:16.818143   65729 logs.go:276] 0 containers: []
	W0906 19:59:16.818151   65729 logs.go:278] No container was found matching "kube-apiserver"
	I0906 19:59:16.818158   65729 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 19:59:16.818205   65729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 19:59:16.851893   65729 cri.go:89] found id: ""
	I0906 19:59:16.851919   65729 logs.go:276] 0 containers: []
	W0906 19:59:16.851926   65729 logs.go:278] No container was found matching "etcd"
	I0906 19:59:16.851932   65729 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 19:59:16.851980   65729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 19:59:16.885337   65729 cri.go:89] found id: ""
	I0906 19:59:16.885360   65729 logs.go:276] 0 containers: []
	W0906 19:59:16.885368   65729 logs.go:278] No container was found matching "coredns"
	I0906 19:59:16.885374   65729 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 19:59:16.885420   65729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 19:59:16.919126   65729 cri.go:89] found id: ""
	I0906 19:59:16.919150   65729 logs.go:276] 0 containers: []
	W0906 19:59:16.919158   65729 logs.go:278] No container was found matching "kube-scheduler"
	I0906 19:59:16.919168   65729 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 19:59:16.919212   65729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 19:59:16.954117   65729 cri.go:89] found id: ""
	I0906 19:59:16.954141   65729 logs.go:276] 0 containers: []
	W0906 19:59:16.954149   65729 logs.go:278] No container was found matching "kube-proxy"
	I0906 19:59:16.954154   65729 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 19:59:16.954199   65729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 19:59:16.988606   65729 cri.go:89] found id: ""
	I0906 19:59:16.988629   65729 logs.go:276] 0 containers: []
	W0906 19:59:16.988636   65729 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 19:59:16.988643   65729 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 19:59:16.988690   65729 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 19:59:17.021839   65729 cri.go:89] found id: ""
	I0906 19:59:17.021865   65729 logs.go:276] 0 containers: []
	W0906 19:59:17.021873   65729 logs.go:278] No container was found matching "kindnet"
	I0906 19:59:17.021881   65729 logs.go:123] Gathering logs for describe nodes ...
	I0906 19:59:17.021893   65729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 19:59:17.135508   65729 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 19:59:17.135531   65729 logs.go:123] Gathering logs for CRI-O ...
	I0906 19:59:17.135549   65729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 19:59:17.238589   65729 logs.go:123] Gathering logs for container status ...
	I0906 19:59:17.238620   65729 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 19:59:17.289983   65729 logs.go:123] Gathering logs for kubelet ...
	I0906 19:59:17.290025   65729 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 19:59:17.346081   65729 logs.go:123] Gathering logs for dmesg ...
	I0906 19:59:17.346118   65729 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0906 19:59:17.368699   65729 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 19:59:17.368790   65729 out.go:270] * 
	* 
	W0906 19:59:17.368865   65729 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 19:59:17.368887   65729 out.go:270] * 
	* 
	W0906 19:59:17.369729   65729 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 19:59:17.373067   65729 out.go:201] 
	W0906 19:59:17.374179   65729 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 19:59:17.374218   65729 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 19:59:17.374236   65729 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 19:59:17.375525   65729 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-843298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 6 (226.060333ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:59:17.653651   72428 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-843298" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (289.65s)
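The kubeadm output captured above shows the kubelet never answering on localhost:10248, which is what the K8S_KUBELET_NOT_RUNNING exit reflects. A minimal manual triage sequence, assuming SSH access to the guest through the profile name used in this run (the profile name and the crio socket path are taken from the log above; `minikube ssh -p`, the `tail` length, and the unit state you will see are assumptions for illustration, not part of the recorded run):

	minikube ssh -p old-k8s-version-843298
	# inside the VM: check whether the kubelet unit started at all
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# list any control-plane containers CRI-O managed to create
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

These are the same commands the kubeadm error text recommends; whether the fix is the suggested --extra-config=kubelet.cgroup-driver=systemd or something else depends on what the journal shows.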

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-504385 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-504385 --alsologtostderr -v=3: exit status 82 (2m0.4860764s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-504385"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:56:37.099099   71294 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:56:37.099229   71294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:56:37.099249   71294 out.go:358] Setting ErrFile to fd 2...
	I0906 19:56:37.099255   71294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:56:37.099435   71294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:56:37.099700   71294 out.go:352] Setting JSON to false
	I0906 19:56:37.099796   71294 mustload.go:65] Loading cluster: no-preload-504385
	I0906 19:56:37.100134   71294 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:56:37.100209   71294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/config.json ...
	I0906 19:56:37.100397   71294 mustload.go:65] Loading cluster: no-preload-504385
	I0906 19:56:37.100524   71294 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:56:37.100567   71294 stop.go:39] StopHost: no-preload-504385
	I0906 19:56:37.100995   71294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:56:37.101045   71294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:56:37.115607   71294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0906 19:56:37.116082   71294 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:56:37.116656   71294 main.go:141] libmachine: Using API Version  1
	I0906 19:56:37.116680   71294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:56:37.117087   71294 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:56:37.119748   71294 out.go:177] * Stopping node "no-preload-504385"  ...
	I0906 19:56:37.120974   71294 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 19:56:37.121014   71294 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 19:56:37.121221   71294 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 19:56:37.121246   71294 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 19:56:37.123984   71294 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 19:56:37.124392   71294 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 20:55:27 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 19:56:37.124427   71294 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 19:56:37.124554   71294 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 19:56:37.124730   71294 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 19:56:37.124888   71294 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 19:56:37.125029   71294 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 19:56:37.212913   71294 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0906 19:56:37.270786   71294 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0906 19:56:37.328921   71294 main.go:141] libmachine: Stopping "no-preload-504385"...
	I0906 19:56:37.328959   71294 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 19:56:37.330741   71294 main.go:141] libmachine: (no-preload-504385) Calling .Stop
	I0906 19:56:37.334038   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 0/120
	I0906 19:56:38.335334   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 1/120
	I0906 19:56:39.336718   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 2/120
	I0906 19:56:40.338093   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 3/120
	I0906 19:56:41.339486   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 4/120
	I0906 19:56:42.343499   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 5/120
	I0906 19:56:43.344743   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 6/120
	I0906 19:56:44.346099   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 7/120
	I0906 19:56:45.348139   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 8/120
	I0906 19:56:46.350239   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 9/120
	I0906 19:56:47.352487   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 10/120
	I0906 19:56:48.353849   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 11/120
	I0906 19:56:49.355726   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 12/120
	I0906 19:56:50.357092   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 13/120
	I0906 19:56:51.358573   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 14/120
	I0906 19:56:52.360593   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 15/120
	I0906 19:56:53.361924   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 16/120
	I0906 19:56:54.363696   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 17/120
	I0906 19:56:55.366008   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 18/120
	I0906 19:56:56.367351   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 19/120
	I0906 19:56:57.369364   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 20/120
	I0906 19:56:58.371262   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 21/120
	I0906 19:56:59.372446   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 22/120
	I0906 19:57:00.374645   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 23/120
	I0906 19:57:01.376077   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 24/120
	I0906 19:57:02.378023   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 25/120
	I0906 19:57:03.379940   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 26/120
	I0906 19:57:04.381371   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 27/120
	I0906 19:57:05.383311   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 28/120
	I0906 19:57:06.384758   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 29/120
	I0906 19:57:07.386283   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 30/120
	I0906 19:57:08.387719   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 31/120
	I0906 19:57:09.389069   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 32/120
	I0906 19:57:10.391339   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 33/120
	I0906 19:57:11.392636   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 34/120
	I0906 19:57:12.394835   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 35/120
	I0906 19:57:13.396217   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 36/120
	I0906 19:57:14.397992   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 37/120
	I0906 19:57:15.399459   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 38/120
	I0906 19:57:16.400901   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 39/120
	I0906 19:57:17.403055   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 40/120
	I0906 19:57:18.404212   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 41/120
	I0906 19:57:19.405679   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 42/120
	I0906 19:57:20.407332   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 43/120
	I0906 19:57:21.409260   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 44/120
	I0906 19:57:22.411236   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 45/120
	I0906 19:57:23.412625   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 46/120
	I0906 19:57:24.413927   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 47/120
	I0906 19:57:25.415755   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 48/120
	I0906 19:57:26.417116   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 49/120
	I0906 19:57:27.419057   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 50/120
	I0906 19:57:28.420442   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 51/120
	I0906 19:57:29.421744   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 52/120
	I0906 19:57:30.423195   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 53/120
	I0906 19:57:31.424586   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 54/120
	I0906 19:57:32.426550   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 55/120
	I0906 19:57:33.427814   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 56/120
	I0906 19:57:34.429136   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 57/120
	I0906 19:57:35.430539   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 58/120
	I0906 19:57:36.432155   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 59/120
	I0906 19:57:37.434385   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 60/120
	I0906 19:57:38.435948   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 61/120
	I0906 19:57:39.437346   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 62/120
	I0906 19:57:40.438753   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 63/120
	I0906 19:57:41.440163   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 64/120
	I0906 19:57:42.442115   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 65/120
	I0906 19:57:43.443651   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 66/120
	I0906 19:57:44.445090   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 67/120
	I0906 19:57:45.446663   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 68/120
	I0906 19:57:46.447906   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 69/120
	I0906 19:57:47.450024   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 70/120
	I0906 19:57:48.451630   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 71/120
	I0906 19:57:49.453122   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 72/120
	I0906 19:57:50.454570   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 73/120
	I0906 19:57:51.456003   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 74/120
	I0906 19:57:52.457927   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 75/120
	I0906 19:57:53.459384   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 76/120
	I0906 19:57:54.460715   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 77/120
	I0906 19:57:55.462233   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 78/120
	I0906 19:57:56.463513   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 79/120
	I0906 19:57:57.465714   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 80/120
	I0906 19:57:58.467372   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 81/120
	I0906 19:57:59.468765   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 82/120
	I0906 19:58:00.470654   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 83/120
	I0906 19:58:01.472327   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 84/120
	I0906 19:58:02.474691   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 85/120
	I0906 19:58:03.476178   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 86/120
	I0906 19:58:04.477909   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 87/120
	I0906 19:58:05.479251   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 88/120
	I0906 19:58:06.480637   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 89/120
	I0906 19:58:07.482540   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 90/120
	I0906 19:58:08.484023   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 91/120
	I0906 19:58:09.485301   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 92/120
	I0906 19:58:10.486882   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 93/120
	I0906 19:58:11.488495   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 94/120
	I0906 19:58:12.490644   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 95/120
	I0906 19:58:13.492006   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 96/120
	I0906 19:58:14.493491   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 97/120
	I0906 19:58:15.494981   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 98/120
	I0906 19:58:16.496576   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 99/120
	I0906 19:58:17.498901   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 100/120
	I0906 19:58:18.500412   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 101/120
	I0906 19:58:19.502486   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 102/120
	I0906 19:58:20.503585   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 103/120
	I0906 19:58:21.505134   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 104/120
	I0906 19:58:22.507182   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 105/120
	I0906 19:58:23.508659   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 106/120
	I0906 19:58:24.510090   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 107/120
	I0906 19:58:25.511458   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 108/120
	I0906 19:58:26.513145   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 109/120
	I0906 19:58:27.515269   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 110/120
	I0906 19:58:28.516683   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 111/120
	I0906 19:58:29.518168   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 112/120
	I0906 19:58:30.519839   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 113/120
	I0906 19:58:31.521400   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 114/120
	I0906 19:58:32.523566   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 115/120
	I0906 19:58:33.525214   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 116/120
	I0906 19:58:34.526659   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 117/120
	I0906 19:58:35.528230   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 118/120
	I0906 19:58:36.529850   71294 main.go:141] libmachine: (no-preload-504385) Waiting for machine to stop 119/120
	I0906 19:58:37.530951   71294 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0906 19:58:37.531016   71294 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0906 19:58:37.532882   71294 out.go:201] 
	W0906 19:58:37.534419   71294 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0906 19:58:37.534432   71294 out.go:270] * 
	* 
	W0906 19:58:37.538236   71294 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 19:58:37.539763   71294 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-504385 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504385 -n no-preload-504385
E0906 19:58:39.161010   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:44.283308   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504385 -n no-preload-504385: exit status 3 (18.59574872s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:58:56.137152   72036 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.184:22: connect: no route to host
	E0906 19:58:56.137171   72036 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.184:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-504385" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.08s)
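The stop failure above is a 120-attempt timeout waiting for the kvm2 guest to power off (GUEST_STOP_TIMEOUT), after which the follow-up status check can no longer reach 192.168.61.184 over SSH. One way to confirm the libvirt side, assuming virsh is available on the host and the domain name matches the profile name shown in the DBG lines above (both assumptions, not part of the recorded run):

	# is the domain still running from libvirt's point of view?
	virsh list --all | grep no-preload-504385
	# force it off and re-check its state
	virsh destroy no-preload-504385
	virsh domstate no-preload-504385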

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-458066 --alsologtostderr -v=3
E0906 19:56:57.122478   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:56:57.128908   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:56:57.140249   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:56:57.161614   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:56:57.203047   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:56:57.284540   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:56:57.446536   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:56:57.768241   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:56:58.410549   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:56:59.691911   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:02.254234   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:07.375649   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-458066 --alsologtostderr -v=3: exit status 82 (2m0.54639054s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-458066"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:56:46.063341   71428 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:56:46.063533   71428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:56:46.063576   71428 out.go:358] Setting ErrFile to fd 2...
	I0906 19:56:46.063594   71428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:56:46.063918   71428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:56:46.064273   71428 out.go:352] Setting JSON to false
	I0906 19:56:46.064428   71428 mustload.go:65] Loading cluster: embed-certs-458066
	I0906 19:56:46.065043   71428 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:56:46.065177   71428 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/config.json ...
	I0906 19:56:46.065442   71428 mustload.go:65] Loading cluster: embed-certs-458066
	I0906 19:56:46.065629   71428 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:56:46.065693   71428 stop.go:39] StopHost: embed-certs-458066
	I0906 19:56:46.066305   71428 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:56:46.066401   71428 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:56:46.087986   71428 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0906 19:56:46.088487   71428 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:56:46.089193   71428 main.go:141] libmachine: Using API Version  1
	I0906 19:56:46.089226   71428 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:56:46.089623   71428 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:56:46.092235   71428 out.go:177] * Stopping node "embed-certs-458066"  ...
	I0906 19:56:46.093338   71428 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 19:56:46.093397   71428 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 19:56:46.093684   71428 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 19:56:46.093730   71428 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 19:56:46.097884   71428 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 19:56:46.098389   71428 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 20:55:56 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 19:56:46.098411   71428 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 19:56:46.098617   71428 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 19:56:46.098799   71428 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 19:56:46.099012   71428 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 19:56:46.099170   71428 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 19:56:46.212502   71428 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0906 19:56:46.292160   71428 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0906 19:56:46.359483   71428 main.go:141] libmachine: Stopping "embed-certs-458066"...
	I0906 19:56:46.359507   71428 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 19:56:46.361251   71428 main.go:141] libmachine: (embed-certs-458066) Calling .Stop
	I0906 19:56:46.365208   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 0/120
	I0906 19:56:47.367476   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 1/120
	I0906 19:56:48.368732   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 2/120
	I0906 19:56:49.369976   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 3/120
	I0906 19:56:50.371491   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 4/120
	I0906 19:56:51.373524   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 5/120
	I0906 19:56:52.375487   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 6/120
	I0906 19:56:53.376625   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 7/120
	I0906 19:56:54.377888   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 8/120
	I0906 19:56:55.379253   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 9/120
	I0906 19:56:56.381450   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 10/120
	I0906 19:56:57.383193   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 11/120
	I0906 19:56:58.384257   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 12/120
	I0906 19:56:59.385380   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 13/120
	I0906 19:57:00.387148   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 14/120
	I0906 19:57:01.388840   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 15/120
	I0906 19:57:02.389954   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 16/120
	I0906 19:57:03.391111   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 17/120
	I0906 19:57:04.392452   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 18/120
	I0906 19:57:05.394508   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 19/120
	I0906 19:57:06.395894   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 20/120
	I0906 19:57:07.397219   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 21/120
	I0906 19:57:08.399246   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 22/120
	I0906 19:57:09.400290   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 23/120
	I0906 19:57:10.401608   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 24/120
	I0906 19:57:11.403240   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 25/120
	I0906 19:57:12.404759   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 26/120
	I0906 19:57:13.405874   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 27/120
	I0906 19:57:14.407283   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 28/120
	I0906 19:57:15.408551   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 29/120
	I0906 19:57:16.410271   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 30/120
	I0906 19:57:17.411242   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 31/120
	I0906 19:57:18.412188   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 32/120
	I0906 19:57:19.414298   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 33/120
	I0906 19:57:20.415562   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 34/120
	I0906 19:57:21.417282   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 35/120
	I0906 19:57:22.419366   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 36/120
	I0906 19:57:23.420894   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 37/120
	I0906 19:57:24.422116   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 38/120
	I0906 19:57:25.423478   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 39/120
	I0906 19:57:26.425317   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 40/120
	I0906 19:57:27.426896   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 41/120
	I0906 19:57:28.427949   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 42/120
	I0906 19:57:29.428980   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 43/120
	I0906 19:57:30.431142   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 44/120
	I0906 19:57:31.432819   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 45/120
	I0906 19:57:32.434012   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 46/120
	I0906 19:57:33.435058   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 47/120
	I0906 19:57:34.436229   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 48/120
	I0906 19:57:35.437274   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 49/120
	I0906 19:57:36.439288   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 50/120
	I0906 19:57:37.440679   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 51/120
	I0906 19:57:38.441806   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 52/120
	I0906 19:57:39.443012   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 53/120
	I0906 19:57:40.444108   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 54/120
	I0906 19:57:41.446166   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 55/120
	I0906 19:57:42.447282   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 56/120
	I0906 19:57:43.448368   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 57/120
	I0906 19:57:44.449297   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 58/120
	I0906 19:57:45.451191   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 59/120
	I0906 19:57:46.453264   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 60/120
	I0906 19:57:47.455166   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 61/120
	I0906 19:57:48.456221   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 62/120
	I0906 19:57:49.457366   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 63/120
	I0906 19:57:50.459000   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 64/120
	I0906 19:57:51.460579   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 65/120
	I0906 19:57:52.461683   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 66/120
	I0906 19:57:53.462937   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 67/120
	I0906 19:57:54.463816   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 68/120
	I0906 19:57:55.464766   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 69/120
	I0906 19:57:56.466622   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 70/120
	I0906 19:57:57.467734   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 71/120
	I0906 19:57:58.468675   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 72/120
	I0906 19:57:59.469788   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 73/120
	I0906 19:58:00.471354   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 74/120
	I0906 19:58:01.473269   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 75/120
	I0906 19:58:02.474571   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 76/120
	I0906 19:58:03.476107   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 77/120
	I0906 19:58:04.477779   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 78/120
	I0906 19:58:05.479368   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 79/120
	I0906 19:58:06.481470   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 80/120
	I0906 19:58:07.482793   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 81/120
	I0906 19:58:08.485056   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 82/120
	I0906 19:58:09.486129   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 83/120
	I0906 19:58:10.487460   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 84/120
	I0906 19:58:11.489492   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 85/120
	I0906 19:58:12.490772   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 86/120
	I0906 19:58:13.492137   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 87/120
	I0906 19:58:14.493604   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 88/120
	I0906 19:58:15.494981   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 89/120
	I0906 19:58:16.497178   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 90/120
	I0906 19:58:17.498673   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 91/120
	I0906 19:58:18.500167   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 92/120
	I0906 19:58:19.501506   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 93/120
	I0906 19:58:20.503050   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 94/120
	I0906 19:58:21.505023   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 95/120
	I0906 19:58:22.507479   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 96/120
	I0906 19:58:23.508889   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 97/120
	I0906 19:58:24.510255   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 98/120
	I0906 19:58:25.511689   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 99/120
	I0906 19:58:26.513746   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 100/120
	I0906 19:58:27.515166   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 101/120
	I0906 19:58:28.516683   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 102/120
	I0906 19:58:29.518040   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 103/120
	I0906 19:58:30.519701   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 104/120
	I0906 19:58:31.521879   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 105/120
	I0906 19:58:32.523393   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 106/120
	I0906 19:58:33.525045   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 107/120
	I0906 19:58:34.526481   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 108/120
	I0906 19:58:35.527931   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 109/120
	I0906 19:58:36.530128   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 110/120
	I0906 19:58:37.531630   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 111/120
	I0906 19:58:38.533141   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 112/120
	I0906 19:58:39.534588   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 113/120
	I0906 19:58:40.536018   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 114/120
	I0906 19:58:41.538230   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 115/120
	I0906 19:58:42.539460   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 116/120
	I0906 19:58:43.540964   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 117/120
	I0906 19:58:44.542369   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 118/120
	I0906 19:58:45.544068   71428 main.go:141] libmachine: (embed-certs-458066) Waiting for machine to stop 119/120
	I0906 19:58:46.544697   71428 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0906 19:58:46.544775   71428 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0906 19:58:46.546996   71428 out.go:201] 
	W0906 19:58:46.548364   71428 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0906 19:58:46.548384   71428 out.go:270] * 
	* 
	W0906 19:58:46.551579   71428 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 19:58:46.552935   71428 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-458066 --alsologtostderr -v=3" : exit status 82
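Note on the failure mode above: the kvm2 driver accepts the Stop call, and the stop path then polls the machine state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120", roughly two minutes) before giving up with GUEST_STOP_TIMEOUT and exit status 82. The sketch below is a minimal, self-contained approximation of that poll loop; the Machine interface, stuckVM type, and stopWithTimeout helper are illustrative stand-ins, not minikube's actual libmachine API.

// stopwait.go — minimal sketch of the stop/poll pattern visible in the trace above.
// Machine and stuckVM are hypothetical; they only mirror the two driver calls
// the log shows (.Stop and .GetState).
package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

// Machine abstracts the driver calls used by the stop path.
type Machine interface {
	Stop() error
	State() (string, error)
}

// stopWithTimeout asks the driver to stop the VM, then polls its state once
// per second for maxAttempts attempts (120 in the trace, i.e. ~2 minutes).
func stopWithTimeout(m Machine, maxAttempts int) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("initiating stop: %w", err)
	}
	for i := 0; i < maxAttempts; i++ {
		st, err := m.State()
		if err == nil && st == "Stopped" {
			return nil // clean shutdown
		}
		log.Printf("Waiting for machine to stop %d/%d", i, maxAttempts)
		time.Sleep(time.Second)
	}
	// This is the branch the failing tests hit: the guest never stops, and
	// minikube surfaces it as GUEST_STOP_TIMEOUT with exit status 82.
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM simulates the failure seen here: Stop() is accepted but the guest
// stays "Running" forever.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	// Demo with a short attempt budget so the timeout path is visible quickly.
	err := stopWithTimeout(stuckVM{}, 3)
	fmt.Println("stop result:", err)
}

Because the simulated guest never reaches "Stopped", the loop exhausts its budget and returns the same "unable to stop vm" error captured in the stderr block above.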
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-458066 -n embed-certs-458066
E0906 19:58:54.525208   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-458066 -n embed-certs-458066: exit status 3 (18.542885168s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:59:05.097195   72099 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host
	E0906 19:59:05.097215   72099 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-458066" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-653828 --alsologtostderr -v=3
E0906 19:57:38.099468   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:53.377650   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:53.384061   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:53.395423   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:53.416794   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:53.458203   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:53.540420   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:53.701941   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:54.023653   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:54.665491   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:55.947539   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:57:58.508995   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:03.630680   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:13.872913   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:19.061171   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:34.030977   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:34.037368   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:34.048741   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:34.070110   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:34.111519   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:34.193412   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:34.354984   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:34.354984   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:34.676264   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:35.317906   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:36.599484   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-653828 --alsologtostderr -v=3: exit status 82 (2m0.524210153s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-653828"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:57:25.764411   71760 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:57:25.764674   71760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:57:25.764684   71760 out.go:358] Setting ErrFile to fd 2...
	I0906 19:57:25.764689   71760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:57:25.764843   71760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:57:25.765053   71760 out.go:352] Setting JSON to false
	I0906 19:57:25.765122   71760 mustload.go:65] Loading cluster: default-k8s-diff-port-653828
	I0906 19:57:25.765419   71760 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:57:25.765473   71760 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/config.json ...
	I0906 19:57:25.765629   71760 mustload.go:65] Loading cluster: default-k8s-diff-port-653828
	I0906 19:57:25.765725   71760 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:57:25.765758   71760 stop.go:39] StopHost: default-k8s-diff-port-653828
	I0906 19:57:25.766148   71760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:57:25.766193   71760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:57:25.782173   71760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I0906 19:57:25.782599   71760 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:57:25.783171   71760 main.go:141] libmachine: Using API Version  1
	I0906 19:57:25.783193   71760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:57:25.783557   71760 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:57:25.785904   71760 out.go:177] * Stopping node "default-k8s-diff-port-653828"  ...
	I0906 19:57:25.787304   71760 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0906 19:57:25.787326   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 19:57:25.787553   71760 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0906 19:57:25.787577   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 19:57:25.790425   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 19:57:25.790851   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 20:56:35 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 19:57:25.790879   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 19:57:25.791000   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 19:57:25.791185   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 19:57:25.791340   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 19:57:25.791444   71760 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 19:57:25.889860   71760 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0906 19:57:25.976885   71760 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0906 19:57:26.042465   71760 main.go:141] libmachine: Stopping "default-k8s-diff-port-653828"...
	I0906 19:57:26.042494   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 19:57:26.044034   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Stop
	I0906 19:57:26.047350   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 0/120
	I0906 19:57:27.048880   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 1/120
	I0906 19:57:28.050111   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 2/120
	I0906 19:57:29.051484   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 3/120
	I0906 19:57:30.052934   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 4/120
	I0906 19:57:31.054954   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 5/120
	I0906 19:57:32.056392   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 6/120
	I0906 19:57:33.057791   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 7/120
	I0906 19:57:34.059467   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 8/120
	I0906 19:57:35.061198   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 9/120
	I0906 19:57:36.063320   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 10/120
	I0906 19:57:37.064653   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 11/120
	I0906 19:57:38.066049   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 12/120
	I0906 19:57:39.067454   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 13/120
	I0906 19:57:40.068941   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 14/120
	I0906 19:57:41.071043   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 15/120
	I0906 19:57:42.072505   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 16/120
	I0906 19:57:43.074237   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 17/120
	I0906 19:57:44.075897   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 18/120
	I0906 19:57:45.077250   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 19/120
	I0906 19:57:46.079452   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 20/120
	I0906 19:57:47.081356   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 21/120
	I0906 19:57:48.082792   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 22/120
	I0906 19:57:49.084372   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 23/120
	I0906 19:57:50.086099   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 24/120
	I0906 19:57:51.088119   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 25/120
	I0906 19:57:52.089488   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 26/120
	I0906 19:57:53.090796   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 27/120
	I0906 19:57:54.092948   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 28/120
	I0906 19:57:55.094282   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 29/120
	I0906 19:57:56.096624   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 30/120
	I0906 19:57:57.098157   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 31/120
	I0906 19:57:58.099698   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 32/120
	I0906 19:57:59.101621   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 33/120
	I0906 19:58:00.103146   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 34/120
	I0906 19:58:01.105197   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 35/120
	I0906 19:58:02.107397   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 36/120
	I0906 19:58:03.109105   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 37/120
	I0906 19:58:04.110508   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 38/120
	I0906 19:58:05.111980   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 39/120
	I0906 19:58:06.114222   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 40/120
	I0906 19:58:07.115642   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 41/120
	I0906 19:58:08.117095   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 42/120
	I0906 19:58:09.118719   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 43/120
	I0906 19:58:10.119949   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 44/120
	I0906 19:58:11.121996   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 45/120
	I0906 19:58:12.123234   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 46/120
	I0906 19:58:13.124719   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 47/120
	I0906 19:58:14.126029   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 48/120
	I0906 19:58:15.127398   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 49/120
	I0906 19:58:16.129548   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 50/120
	I0906 19:58:17.131947   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 51/120
	I0906 19:58:18.133485   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 52/120
	I0906 19:58:19.134870   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 53/120
	I0906 19:58:20.136212   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 54/120
	I0906 19:58:21.138228   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 55/120
	I0906 19:58:22.139517   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 56/120
	I0906 19:58:23.140877   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 57/120
	I0906 19:58:24.142224   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 58/120
	I0906 19:58:25.143601   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 59/120
	I0906 19:58:26.145857   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 60/120
	I0906 19:58:27.147115   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 61/120
	I0906 19:58:28.148643   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 62/120
	I0906 19:58:29.150158   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 63/120
	I0906 19:58:30.151558   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 64/120
	I0906 19:58:31.153712   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 65/120
	I0906 19:58:32.155033   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 66/120
	I0906 19:58:33.156486   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 67/120
	I0906 19:58:34.157930   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 68/120
	I0906 19:58:35.159463   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 69/120
	I0906 19:58:36.161826   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 70/120
	I0906 19:58:37.163317   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 71/120
	I0906 19:58:38.164666   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 72/120
	I0906 19:58:39.166006   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 73/120
	I0906 19:58:40.167391   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 74/120
	I0906 19:58:41.169013   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 75/120
	I0906 19:58:42.170770   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 76/120
	I0906 19:58:43.172159   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 77/120
	I0906 19:58:44.173713   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 78/120
	I0906 19:58:45.175010   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 79/120
	I0906 19:58:46.177563   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 80/120
	I0906 19:58:47.178951   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 81/120
	I0906 19:58:48.180243   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 82/120
	I0906 19:58:49.182134   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 83/120
	I0906 19:58:50.183396   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 84/120
	I0906 19:58:51.185092   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 85/120
	I0906 19:58:52.186621   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 86/120
	I0906 19:58:53.188039   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 87/120
	I0906 19:58:54.189539   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 88/120
	I0906 19:58:55.190918   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 89/120
	I0906 19:58:56.193308   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 90/120
	I0906 19:58:57.195291   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 91/120
	I0906 19:58:58.196677   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 92/120
	I0906 19:58:59.197963   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 93/120
	I0906 19:59:00.199616   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 94/120
	I0906 19:59:01.201630   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 95/120
	I0906 19:59:02.203004   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 96/120
	I0906 19:59:03.204332   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 97/120
	I0906 19:59:04.205636   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 98/120
	I0906 19:59:05.207031   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 99/120
	I0906 19:59:06.208338   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 100/120
	I0906 19:59:07.209734   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 101/120
	I0906 19:59:08.211136   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 102/120
	I0906 19:59:09.212456   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 103/120
	I0906 19:59:10.213909   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 104/120
	I0906 19:59:11.215782   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 105/120
	I0906 19:59:12.217452   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 106/120
	I0906 19:59:13.218827   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 107/120
	I0906 19:59:14.220179   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 108/120
	I0906 19:59:15.221597   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 109/120
	I0906 19:59:16.223826   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 110/120
	I0906 19:59:17.225634   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 111/120
	I0906 19:59:18.227725   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 112/120
	I0906 19:59:19.229118   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 113/120
	I0906 19:59:20.231625   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 114/120
	I0906 19:59:21.233571   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 115/120
	I0906 19:59:22.234956   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 116/120
	I0906 19:59:23.236637   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 117/120
	I0906 19:59:24.238085   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 118/120
	I0906 19:59:25.239548   71760 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for machine to stop 119/120
	I0906 19:59:26.240254   71760 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0906 19:59:26.240299   71760 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0906 19:59:26.242462   71760 out.go:201] 
	W0906 19:59:26.243920   71760 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0906 19:59:26.243937   71760 out.go:270] * 
	* 
	W0906 19:59:26.247126   71760 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 19:59:26.248686   71760 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-653828 --alsologtostderr -v=3" : exit status 82
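The default-k8s-diff-port profile fails the same way, and its trace also shows the step that precedes the stop attempt: /etc/cni and /etc/kubernetes are backed up to /var/lib/minikube/backup on the guest (a sudo mkdir -p followed by one rsync --archive --relative per directory) before the driver's Stop is called. Below is a rough sketch of that backup step, assuming a hypothetical runCommand callback that executes a shell command on the guest over SSH; minikube's real ssh_runner is not reproduced here.

// backup.go — sketch of the pre-stop backup seen in the log:
// "backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]".
// runCommand is an assumed stand-in for minikube's ssh_runner.
package backup

import "fmt"

func backupVMConfig(runCommand func(cmd string) error) error {
	const backupDir = "/var/lib/minikube/backup"
	dirs := []string{"/etc/cni", "/etc/kubernetes"}

	// Create the backup target first, as the trace does.
	if err := runCommand("sudo mkdir -p " + backupDir); err != nil {
		return fmt.Errorf("creating %s: %w", backupDir, err)
	}
	// One rsync per directory; --relative keeps the full source path under
	// the backup root (e.g. /var/lib/minikube/backup/etc/kubernetes).
	for _, d := range dirs {
		cmd := fmt.Sprintf("sudo rsync --archive --relative %s %s", d, backupDir)
		if err := runCommand(cmd); err != nil {
			return fmt.Errorf("backing up %s: %w", d, err)
		}
	}
	return nil
}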
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828
E0906 19:59:28.493709   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:33.615689   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:39.189202   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:40.982537   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:43.857816   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828: exit status 3 (18.526957515s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:59:44.777237   72625 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.16:22: connect: no route to host
	E0906 19:59:44.777264   72625 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.16:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-653828" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504385 -n no-preload-504385
E0906 19:58:58.212128   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:58.218512   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:58.229861   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:58.251213   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:58.292665   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:58.374057   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:58.535869   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:58:58.857603   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504385 -n no-preload-504385: exit status 3 (3.16822237s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:58:59.305213   72161 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.184:22: connect: no route to host
	E0906 19:58:59.305248   72161 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.184:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-504385 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0906 19:58:59.499282   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:00.781495   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:03.343770   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-504385 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152508889s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.184:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-504385 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504385 -n no-preload-504385
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504385 -n no-preload-504385: exit status 3 (3.062943361s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:59:08.521314   72258 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.184:22: connect: no route to host
	E0906 19:59:08.521332   72258 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.184:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-504385" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
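For reference, the post-stop check that fails at start_stop_delete_test.go:241 can be paraphrased as the minimal Go sketch below. It is reconstructed only from the command and messages logged above, not from the actual minikube test source; the binary path and profile name are copied verbatim from this run.

package poststop_test

import (
	"os/exec"
	"strings"
	"testing"
)

// Hypothetical paraphrase of the post-stop status probe seen in the log above.
// Exit status 3 from "minikube status" is tolerated ("may be ok"); only the
// printed host state matters, and after a clean stop it must read "Stopped".
func TestPostStopHostStatus(t *testing.T) {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-504385", "-n", "no-preload-504385").CombinedOutput()
	if got := strings.TrimSpace(string(out)); got != "Stopped" {
		t.Errorf("expected post-stop host status to be %q but got %q", "Stopped", got)
	}
}

In this run the probe prints "Error" because the stopped VM's address (192.168.61.184:22) is unreachable, so the assertion fails before the dashboard addon is even enabled.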

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-458066 -n embed-certs-458066
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-458066 -n embed-certs-458066: exit status 3 (3.16789126s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:59:08.265246   72228 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host
	E0906 19:59:08.265270   72228 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-458066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0906 19:59:08.465199   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-458066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152919722s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-458066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-458066 -n embed-certs-458066
E0906 19:59:15.006704   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:15.317005   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-458066 -n embed-certs-458066: exit status 3 (3.066528466s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:59:17.485187   72380 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host
	E0906 19:59:17.485205   72380 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.118:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-458066" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-843298 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-843298 create -f testdata/busybox.yaml: exit status 1 (41.934097ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-843298" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-843298 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 6 (221.189638ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:59:17.915802   72503 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-843298" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 6 (224.927738ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:59:18.140924   72533 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-843298" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)
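The immediate cause of this failure is not the busybox manifest but the missing kubeconfig entry: both status probes report that "old-k8s-version-843298" does not appear in the kubeconfig. A quick way to confirm that from the same environment is the hypothetical diagnostic below (not part of the test suite); the context name is taken from the log above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Hypothetical diagnostic: lists the contexts visible to kubectl and reports
// whether the profile's context exists before blaming the busybox deployment.
func main() {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl config get-contexts failed: %v\n%s", err, out)
		return
	}
	for _, name := range strings.Fields(string(out)) {
		if name == "old-k8s-version-843298" {
			fmt.Println("context exists; the create failure needs another explanation")
			return
		}
	}
	fmt.Println(`context "old-k8s-version-843298" is missing, which matches the error above`)
}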

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (91.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-843298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0906 19:59:18.707173   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:23.362163   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:23.368541   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:23.380098   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:23.401517   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:23.442933   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:23.524503   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:23.686017   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:24.007575   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:24.649799   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:25.931491   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-843298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m31.343205451s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-843298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-843298 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-843298 describe deploy/metrics-server -n kube-system: exit status 1 (42.084285ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-843298" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-843298 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 6 (215.911047ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 20:00:49.745127   73098 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-843298" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (91.60s)
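The image check that produces the final failure message above can be paraphrased as the Go sketch below, reconstructed from the logged commands and the expected string at start_stop_delete_test.go:221. It is not the actual test code; the context name and image reference are copied from this run.

package addonimage_test

import (
	"os/exec"
	"strings"
	"testing"
)

// Hypothetical paraphrase: describe the metrics-server deployment and require the
// overridden image reference (registry rewritten to fake.domain) to appear in it.
func TestMetricsServerUsesOverriddenImage(t *testing.T) {
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-843298",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		t.Fatalf("failed to describe deploy/metrics-server: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), " fake.domain/registry.k8s.io/echoserver:1.4") {
		t.Errorf("addon did not load correct image; deployment info: %s", out)
	}
}

Here both steps fail earlier because the apiserver at localhost:8443 refuses connections and the context is absent, so the test never reaches the image comparison with real deployment output.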

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828: exit status 3 (3.167639467s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:59:47.945224   72722 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.16:22: connect: no route to host
	E0906 19:59:47.945243   72722 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.16:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-653828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0906 19:59:49.184120   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-653828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15380295s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.16:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-653828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828
E0906 19:59:55.969261   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828: exit status 3 (3.062122187s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0906 19:59:57.161285   72821 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.16:22: connect: no route to host
	E0906 19:59:57.161306   72821 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.16:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-653828" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (728.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-843298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0906 20:00:55.998882   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:01:01.120721   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:01:11.362228   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:01:12.259482   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:01:17.891082   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:01:20.364033   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:01:31.843957   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:01:42.073453   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:01:44.178682   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:01:57.122416   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:02:07.222834   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:02:12.805675   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:02:24.823841   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:02:42.286087   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:02:53.377804   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:03:21.081303   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:03:34.031756   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:03:34.727295   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:03:58.211779   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:04:01.732408   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:04:23.362119   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:04:25.915636   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:04:49.184171   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:04:51.064294   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:04:58.425171   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:05:26.128300   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:05:50.866762   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:06:18.569167   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:06:44.178886   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:06:57.122949   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:07:53.377525   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:08:34.031575   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:08:58.211895   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-843298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m4.65474208s)

                                                
                                                
-- stdout --
	* [old-k8s-version-843298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-843298" primary control-plane node in "old-k8s-version-843298" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-843298" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 20:00:55.455816   73230 out.go:345] Setting OutFile to fd 1 ...
	I0906 20:00:55.455933   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.455943   73230 out.go:358] Setting ErrFile to fd 2...
	I0906 20:00:55.455951   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.456141   73230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 20:00:55.456685   73230 out.go:352] Setting JSON to false
	I0906 20:00:55.457698   73230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6204,"bootTime":1725646651,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 20:00:55.457762   73230 start.go:139] virtualization: kvm guest
	I0906 20:00:55.459863   73230 out.go:177] * [old-k8s-version-843298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 20:00:55.461119   73230 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 20:00:55.461167   73230 notify.go:220] Checking for updates...
	I0906 20:00:55.463398   73230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:00:55.464573   73230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:00:55.465566   73230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:00:55.466605   73230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 20:00:55.467834   73230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:00:55.469512   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:00:55.470129   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.470183   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.484881   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I0906 20:00:55.485238   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.485752   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.485776   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.486108   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.486296   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.488175   73230 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 20:00:55.489359   73230 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 20:00:55.489671   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.489705   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.504589   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0906 20:00:55.505047   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.505557   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.505581   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.505867   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.506018   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.541116   73230 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 20:00:55.542402   73230 start.go:297] selected driver: kvm2
	I0906 20:00:55.542423   73230 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.542548   73230 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:00:55.543192   73230 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.543257   73230 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 20:00:55.558465   73230 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 20:00:55.558833   73230 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:00:55.558865   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:00:55.558875   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:00:55.558908   73230 start.go:340] cluster config:
	{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.559011   73230 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.561521   73230 out.go:177] * Starting "old-k8s-version-843298" primary control-plane node in "old-k8s-version-843298" cluster
	I0906 20:00:55.562714   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:00:55.562760   73230 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 20:00:55.562773   73230 cache.go:56] Caching tarball of preloaded images
	I0906 20:00:55.562856   73230 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 20:00:55.562868   73230 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0906 20:00:55.562977   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:00:55.563173   73230 start.go:360] acquireMachinesLock for old-k8s-version-843298: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:04:26.558023   73230 start.go:364] duration metric: took 3m30.994815351s to acquireMachinesLock for "old-k8s-version-843298"
	I0906 20:04:26.558087   73230 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:26.558096   73230 fix.go:54] fixHost starting: 
	I0906 20:04:26.558491   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:26.558542   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:26.576511   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0906 20:04:26.576933   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:26.577434   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:04:26.577460   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:26.577794   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:26.577968   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:26.578128   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetState
	I0906 20:04:26.579640   73230 fix.go:112] recreateIfNeeded on old-k8s-version-843298: state=Stopped err=<nil>
	I0906 20:04:26.579674   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	W0906 20:04:26.579829   73230 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:26.581843   73230 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-843298" ...
	I0906 20:04:26.583194   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .Start
	I0906 20:04:26.583341   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring networks are active...
	I0906 20:04:26.584046   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network default is active
	I0906 20:04:26.584420   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network mk-old-k8s-version-843298 is active
	I0906 20:04:26.584851   73230 main.go:141] libmachine: (old-k8s-version-843298) Getting domain xml...
	I0906 20:04:26.585528   73230 main.go:141] libmachine: (old-k8s-version-843298) Creating domain...
	I0906 20:04:27.874281   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting to get IP...
	I0906 20:04:27.875189   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:27.875762   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:27.875844   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:27.875754   74166 retry.go:31] will retry after 289.364241ms: waiting for machine to come up
	I0906 20:04:28.166932   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.167349   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.167375   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.167303   74166 retry.go:31] will retry after 317.106382ms: waiting for machine to come up
	I0906 20:04:28.485664   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.486147   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.486241   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.486199   74166 retry.go:31] will retry after 401.712201ms: waiting for machine to come up
	I0906 20:04:28.890039   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.890594   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.890621   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.890540   74166 retry.go:31] will retry after 570.418407ms: waiting for machine to come up
	I0906 20:04:29.462983   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:29.463463   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:29.463489   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:29.463428   74166 retry.go:31] will retry after 696.361729ms: waiting for machine to come up
	I0906 20:04:30.161305   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:30.161829   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:30.161876   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:30.161793   74166 retry.go:31] will retry after 896.800385ms: waiting for machine to come up
	I0906 20:04:31.059799   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.060272   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.060294   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.060226   74166 retry.go:31] will retry after 841.627974ms: waiting for machine to come up
	I0906 20:04:31.903823   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.904258   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.904280   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.904238   74166 retry.go:31] will retry after 1.274018797s: waiting for machine to come up
	I0906 20:04:33.179723   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:33.180090   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:33.180133   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:33.180059   74166 retry.go:31] will retry after 1.496142841s: waiting for machine to come up
	I0906 20:04:34.678209   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:34.678697   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:34.678726   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:34.678652   74166 retry.go:31] will retry after 1.795101089s: waiting for machine to come up
	I0906 20:04:36.474937   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:36.475399   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:36.475497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:36.475351   74166 retry.go:31] will retry after 1.918728827s: waiting for machine to come up
	I0906 20:04:38.397024   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:38.397588   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:38.397617   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:38.397534   74166 retry.go:31] will retry after 3.460427722s: waiting for machine to come up
	I0906 20:04:41.860109   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:41.860612   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:41.860640   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:41.860560   74166 retry.go:31] will retry after 4.509018672s: waiting for machine to come up
	I0906 20:04:46.374128   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374599   73230 main.go:141] libmachine: (old-k8s-version-843298) Found IP for machine: 192.168.72.30
	I0906 20:04:46.374629   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has current primary IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374642   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserving static IP address...
	I0906 20:04:46.375045   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.375071   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | skip adding static IP to network mk-old-k8s-version-843298 - found existing host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"}
	I0906 20:04:46.375081   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserved static IP address: 192.168.72.30
	I0906 20:04:46.375104   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting for SSH to be available...
	I0906 20:04:46.375119   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Getting to WaitForSSH function...
	I0906 20:04:46.377497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377836   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.377883   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377956   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH client type: external
	I0906 20:04:46.377982   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa (-rw-------)
	I0906 20:04:46.378028   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:46.378044   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | About to run SSH command:
	I0906 20:04:46.378054   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | exit 0
	I0906 20:04:46.505025   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:46.505386   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetConfigRaw
	I0906 20:04:46.506031   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.508401   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.508787   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.508827   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.509092   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:04:46.509321   73230 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:46.509339   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:46.509549   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.511816   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512230   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.512265   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512436   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.512618   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512794   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512932   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.513123   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.513364   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.513378   73230 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:46.629437   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:46.629469   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629712   73230 buildroot.go:166] provisioning hostname "old-k8s-version-843298"
	I0906 20:04:46.629731   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629910   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.632226   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632620   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.632653   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632817   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.633009   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633204   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633364   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.633544   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.633758   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.633779   73230 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-843298 && echo "old-k8s-version-843298" | sudo tee /etc/hostname
	I0906 20:04:46.764241   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-843298
	
	I0906 20:04:46.764271   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.766678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767063   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.767092   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767236   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.767414   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767591   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767740   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.767874   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.768069   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.768088   73230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-843298' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-843298/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-843298' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:46.890399   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:46.890424   73230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:46.890461   73230 buildroot.go:174] setting up certificates
	I0906 20:04:46.890471   73230 provision.go:84] configureAuth start
	I0906 20:04:46.890479   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.890714   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.893391   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893765   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.893802   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893942   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.896173   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896505   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.896524   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896688   73230 provision.go:143] copyHostCerts
	I0906 20:04:46.896741   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:46.896756   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:46.896814   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:46.896967   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:46.896977   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:46.897008   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:46.897096   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:46.897104   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:46.897133   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:46.897193   73230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-843298 san=[127.0.0.1 192.168.72.30 localhost minikube old-k8s-version-843298]
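	A quick way to confirm which SANs actually ended up in that generated server certificate (an illustrative openssl one-liner run manually on the host, assuming the server.pem path shown above; not part of the minikube run recorded here) is:

		openssl x509 -in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'

	Its output should list 127.0.0.1, 192.168.72.30, localhost, minikube and old-k8s-version-843298, matching the san=[...] field in the log line above.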
	I0906 20:04:47.128570   73230 provision.go:177] copyRemoteCerts
	I0906 20:04:47.128627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:47.128653   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.131548   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.131952   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.131981   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.132164   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.132396   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.132571   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.132705   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.223745   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:47.249671   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0906 20:04:47.274918   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:47.300351   73230 provision.go:87] duration metric: took 409.869395ms to configureAuth
	I0906 20:04:47.300376   73230 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:47.300584   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:04:47.300673   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.303255   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303559   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.303581   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303739   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.303943   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304098   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304266   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.304407   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.304623   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.304644   73230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:47.539793   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:47.539824   73230 machine.go:96] duration metric: took 1.030489839s to provisionDockerMachine
	I0906 20:04:47.539836   73230 start.go:293] postStartSetup for "old-k8s-version-843298" (driver="kvm2")
	I0906 20:04:47.539849   73230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:47.539884   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.540193   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:47.540220   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.543190   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543482   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.543506   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543707   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.543938   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.544097   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.544243   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.633100   73230 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:47.637336   73230 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:47.637368   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:47.637459   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:47.637541   73230 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:47.637627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:47.648442   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:47.672907   73230 start.go:296] duration metric: took 133.055727ms for postStartSetup
	I0906 20:04:47.672951   73230 fix.go:56] duration metric: took 21.114855209s for fixHost
	I0906 20:04:47.672978   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.675459   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.675833   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.675863   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.676005   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.676303   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676471   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676661   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.676846   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.677056   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.677070   73230 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:47.793647   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653087.750926682
	
	I0906 20:04:47.793671   73230 fix.go:216] guest clock: 1725653087.750926682
	I0906 20:04:47.793681   73230 fix.go:229] Guest: 2024-09-06 20:04:47.750926682 +0000 UTC Remote: 2024-09-06 20:04:47.67295613 +0000 UTC m=+232.250384025 (delta=77.970552ms)
	I0906 20:04:47.793735   73230 fix.go:200] guest clock delta is within tolerance: 77.970552ms
	I0906 20:04:47.793746   73230 start.go:83] releasing machines lock for "old-k8s-version-843298", held for 21.235682628s
	I0906 20:04:47.793778   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.794059   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:47.796792   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797195   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.797229   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797425   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798019   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798230   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798314   73230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:47.798360   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.798488   73230 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:47.798509   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.801253   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801632   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.801658   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801867   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802060   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802122   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.802152   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.802210   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802318   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802460   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802504   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.802580   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802722   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.886458   73230 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:47.910204   73230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:48.055661   73230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:48.063024   73230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:48.063090   73230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:48.084749   73230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:48.084771   73230 start.go:495] detecting cgroup driver to use...
	I0906 20:04:48.084892   73230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:48.105494   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:48.123487   73230 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:48.123564   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:48.145077   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:48.161336   73230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:48.283568   73230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:48.445075   73230 docker.go:233] disabling docker service ...
	I0906 20:04:48.445146   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:48.461122   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:48.475713   73230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:48.632804   73230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:48.762550   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:48.778737   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:48.798465   73230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:04:48.798549   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.811449   73230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:48.811523   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.824192   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.835598   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
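	The sed invocations above edit individual keys of /etc/crio/crio.conf.d/02-crio.conf in place rather than rewriting the file; a quick way to inspect the result (an illustrative check run manually over SSH, not part of the minikube run recorded here) would be:

		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf

	Given the commands above, this should report pause_image = "registry.k8s.io/pause:3.2", cgroup_manager = "cgroupfs" and conmon_cgroup = "pod".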
	I0906 20:04:48.847396   73230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:48.860005   73230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:48.871802   73230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:48.871864   73230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:48.887596   73230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:48.899508   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:49.041924   73230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:49.144785   73230 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:49.144885   73230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:49.150404   73230 start.go:563] Will wait 60s for crictl version
	I0906 20:04:49.150461   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:49.154726   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:49.202450   73230 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:49.202557   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.235790   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.270094   73230 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0906 20:04:49.271457   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:49.274710   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275114   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:49.275139   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275475   73230 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:49.280437   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:49.293664   73230 kubeadm.go:883] updating cluster {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:49.293793   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:04:49.293842   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:49.348172   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:49.348251   73230 ssh_runner.go:195] Run: which lz4
	I0906 20:04:49.352703   73230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:49.357463   73230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:49.357501   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0906 20:04:51.190323   73230 crio.go:462] duration metric: took 1.837657617s to copy over tarball
	I0906 20:04:51.190410   73230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:54.320754   73230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.130319477s)
	I0906 20:04:54.320778   73230 crio.go:469] duration metric: took 3.130424981s to extract the tarball
	I0906 20:04:54.320785   73230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:54.388660   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:54.427475   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:54.427505   73230 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:04:54.427580   73230 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.427594   73230 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.427611   73230 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.427662   73230 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.427691   73230 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.427696   73230 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.427813   73230 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.427672   73230 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0906 20:04:54.429432   73230 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.429443   73230 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.429447   73230 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.429448   73230 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.429475   73230 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.429449   73230 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.429496   73230 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.429589   73230 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 20:04:54.603502   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.607745   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.610516   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.613580   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.616591   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.622381   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 20:04:54.636746   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.690207   73230 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0906 20:04:54.690254   73230 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.690306   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.788758   73230 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0906 20:04:54.788804   73230 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.788876   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.804173   73230 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0906 20:04:54.804228   73230 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.804273   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817005   73230 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0906 20:04:54.817056   73230 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.817074   73230 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0906 20:04:54.817101   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817122   73230 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.817138   73230 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0906 20:04:54.817167   73230 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 20:04:54.817202   73230 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0906 20:04:54.817213   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817220   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.817227   73230 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.817168   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817253   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817301   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.817333   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902264   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.902422   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902522   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.902569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.902602   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.902654   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:54.902708   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.061686   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.073933   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.085364   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:55.085463   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.085399   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.085610   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:55.085725   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.192872   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:55.196085   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.255204   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.288569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.291461   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0906 20:04:55.291541   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.291559   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0906 20:04:55.291726   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0906 20:04:55.500590   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0906 20:04:55.500702   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0906 20:04:55.500740   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0906 20:04:55.500824   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0906 20:04:55.500885   73230 cache_images.go:92] duration metric: took 1.07336017s to LoadCachedImages
	W0906 20:04:55.500953   73230 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0906 20:04:55.500969   73230 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.20.0 crio true true} ...
	I0906 20:04:55.501112   73230 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-843298 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:55.501192   73230 ssh_runner.go:195] Run: crio config
	I0906 20:04:55.554097   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:04:55.554119   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:55.554135   73230 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:55.554154   73230 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-843298 NodeName:old-k8s-version-843298 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 20:04:55.554359   73230 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-843298"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:55.554441   73230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0906 20:04:55.565923   73230 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:55.566004   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:55.577366   73230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0906 20:04:55.595470   73230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:55.614641   73230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0906 20:04:55.637739   73230 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:55.642233   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:55.658409   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:55.804327   73230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:55.824288   73230 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298 for IP: 192.168.72.30
	I0906 20:04:55.824308   73230 certs.go:194] generating shared ca certs ...
	I0906 20:04:55.824323   73230 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:55.824479   73230 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:55.824541   73230 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:55.824560   73230 certs.go:256] generating profile certs ...
	I0906 20:04:55.824680   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.key
	I0906 20:04:55.824755   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3
	I0906 20:04:55.824799   73230 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key
	I0906 20:04:55.824952   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:55.824995   73230 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:55.825008   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:55.825041   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:55.825072   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:55.825102   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:55.825158   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:55.825878   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:55.868796   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:55.905185   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:55.935398   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:55.973373   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 20:04:56.008496   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:04:56.046017   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:56.080049   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:56.122717   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:56.151287   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:56.184273   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:56.216780   73230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:56.239708   73230 ssh_runner.go:195] Run: openssl version
	I0906 20:04:56.246127   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:56.257597   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262515   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262594   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.269207   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:56.281646   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:56.293773   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299185   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299255   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.305740   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:56.319060   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:56.330840   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336013   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336082   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.342576   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
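The `3ec20f2e.0`, `b5213941.0`, and `51391683.0` link names in the commands above follow OpenSSL's subject-hash convention: the value printed by `openssl x509 -hash -noout` becomes the symlink name under /etc/ssl/certs, which is how the system trust store locates a CA by hash. A minimal shell sketch of the same pattern (not minikube's exact code), using the minikubeCA path shown in this log:

    # print the OpenSSL subject hash of the CA (b5213941 for minikubeCA in this run)
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # link the CA into the system trust store under its hash name
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"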
	I0906 20:04:56.354648   73230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:56.359686   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:56.366321   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:56.372646   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:56.379199   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:56.386208   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:56.392519   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
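The six `openssl x509 ... -checkend 86400` runs above are an expiry probe: `-checkend 86400` exits 0 when the certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise, presumably so that expiring certs get regenerated before the restart. A minimal shell sketch of the same check against one of the paths from the log (may need root to read the key material):

    # exit status 0 means the cert is still valid for at least another 24 hours
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "etcd server cert valid for 24h or more"
    else
        echo "etcd server cert expires within 24h (or is unreadable); regenerate"
    fi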
	I0906 20:04:56.399335   73230 kubeadm.go:392] StartCluster: {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:56.399442   73230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:56.399495   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.441986   73230 cri.go:89] found id: ""
	I0906 20:04:56.442069   73230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:56.454884   73230 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:56.454907   73230 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:56.454977   73230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:56.465647   73230 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:56.466650   73230 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:04:56.467285   73230 kubeconfig.go:62] /home/jenkins/minikube-integration/19576-6021/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-843298" cluster setting kubeconfig missing "old-k8s-version-843298" context setting]
	I0906 20:04:56.468248   73230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:56.565587   73230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:56.576221   73230 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.30
	I0906 20:04:56.576261   73230 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:56.576277   73230 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:56.576342   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.621597   73230 cri.go:89] found id: ""
	I0906 20:04:56.621663   73230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:56.639924   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:56.649964   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:56.649989   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:56.650042   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:56.661290   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:56.661343   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:56.671361   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:56.680865   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:56.680939   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:56.696230   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.706613   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:56.706692   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.719635   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:56.729992   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:56.730045   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:56.740040   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:56.750666   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:56.891897   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.681824   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.972206   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.091751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.206345   73230 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:58.206443   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:58.707412   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.206780   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.707273   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:00.207218   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:00.707010   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.206708   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.707125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.207349   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.706670   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.207287   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.706650   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.207125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.707193   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:05.207119   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:05.707351   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.206573   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.707452   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.206554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.706854   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.206925   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.707456   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.207200   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.706741   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:10.206605   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:10.706506   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.207411   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.707316   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.207239   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.706502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.206560   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.706593   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.207192   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.706940   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:15.207250   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:15.706728   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.207477   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.707337   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.206710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.707209   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.206544   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.707104   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.206752   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.706561   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:20.206507   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:20.706855   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.206585   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.706948   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.207150   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.706508   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.207459   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.706894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.206643   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.707208   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:25.206797   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:25.706669   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.206691   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.707336   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.206666   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.706715   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.206488   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.706489   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.207461   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.707293   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:30.206591   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:30.707091   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.207070   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.707224   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.207295   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.707195   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.207373   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.707519   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.207428   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.706808   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:35.207396   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:35.707415   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.206955   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.706868   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.206515   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.706659   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.206735   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.706915   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.207300   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.707211   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:40.207085   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:40.706720   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.206896   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.707281   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.206751   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.706754   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.206987   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.707245   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.207502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.707112   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:45.206569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:45.707450   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.207446   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.707006   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.206484   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.707168   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.207536   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.707554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.206894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.706709   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:50.206799   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:50.707012   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.206914   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.706917   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.207465   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.706682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.206565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.706757   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.206600   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.706926   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:55.207382   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:55.707103   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.206621   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.707156   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.207277   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.706568   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:58.206599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:05:58.206698   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:05:58.245828   73230 cri.go:89] found id: ""
	I0906 20:05:58.245857   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.245868   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:05:58.245875   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:05:58.245938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:05:58.283189   73230 cri.go:89] found id: ""
	I0906 20:05:58.283217   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.283228   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:05:58.283235   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:05:58.283303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:05:58.320834   73230 cri.go:89] found id: ""
	I0906 20:05:58.320868   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.320880   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:05:58.320889   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:05:58.320944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:05:58.356126   73230 cri.go:89] found id: ""
	I0906 20:05:58.356152   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.356162   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:05:58.356169   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:05:58.356227   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:05:58.395951   73230 cri.go:89] found id: ""
	I0906 20:05:58.395977   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.395987   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:05:58.395994   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:05:58.396061   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:05:58.431389   73230 cri.go:89] found id: ""
	I0906 20:05:58.431415   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.431426   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:05:58.431433   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:05:58.431511   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:05:58.466255   73230 cri.go:89] found id: ""
	I0906 20:05:58.466285   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.466294   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:05:58.466300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:05:58.466356   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:05:58.505963   73230 cri.go:89] found id: ""
	I0906 20:05:58.505989   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.505997   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:05:58.506006   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:05:58.506018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:05:58.579027   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:05:58.579061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:05:58.620332   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:05:58.620365   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:05:58.675017   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:05:58.675052   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:05:58.689944   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:05:58.689970   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:05:58.825396   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:01.326375   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:01.340508   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:01.340570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:01.375429   73230 cri.go:89] found id: ""
	I0906 20:06:01.375460   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.375470   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:01.375478   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:01.375539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:01.410981   73230 cri.go:89] found id: ""
	I0906 20:06:01.411008   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.411019   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:01.411026   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:01.411083   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:01.448925   73230 cri.go:89] found id: ""
	I0906 20:06:01.448957   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.448968   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:01.448975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:01.449040   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:01.492063   73230 cri.go:89] found id: ""
	I0906 20:06:01.492094   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.492104   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:01.492112   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:01.492181   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:01.557779   73230 cri.go:89] found id: ""
	I0906 20:06:01.557812   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.557823   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:01.557830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:01.557892   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:01.604397   73230 cri.go:89] found id: ""
	I0906 20:06:01.604424   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.604432   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:01.604437   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:01.604482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:01.642249   73230 cri.go:89] found id: ""
	I0906 20:06:01.642280   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.642292   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:01.642300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:01.642364   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:01.692434   73230 cri.go:89] found id: ""
	I0906 20:06:01.692462   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.692474   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:01.692483   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:01.692498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:01.705860   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:01.705884   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:01.783929   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:01.783954   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:01.783965   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:01.864347   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:01.864385   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:01.902284   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:01.902311   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:04.456090   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:04.469775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:04.469840   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:04.505742   73230 cri.go:89] found id: ""
	I0906 20:06:04.505769   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.505778   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:04.505783   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:04.505835   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:04.541787   73230 cri.go:89] found id: ""
	I0906 20:06:04.541811   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.541819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:04.541824   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:04.541874   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:04.578775   73230 cri.go:89] found id: ""
	I0906 20:06:04.578806   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.578817   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:04.578825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:04.578885   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:04.614505   73230 cri.go:89] found id: ""
	I0906 20:06:04.614533   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.614542   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:04.614548   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:04.614594   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:04.652988   73230 cri.go:89] found id: ""
	I0906 20:06:04.653016   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.653027   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:04.653035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:04.653104   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:04.692380   73230 cri.go:89] found id: ""
	I0906 20:06:04.692408   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.692416   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:04.692423   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:04.692478   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:04.729846   73230 cri.go:89] found id: ""
	I0906 20:06:04.729869   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.729880   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:04.729887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:04.729953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:04.766341   73230 cri.go:89] found id: ""
	I0906 20:06:04.766370   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.766379   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:04.766390   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:04.766405   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:04.779801   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:04.779828   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:04.855313   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:04.855334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:04.855346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:04.934210   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:04.934246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:04.975589   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:04.975621   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:07.528622   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:07.544085   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:07.544156   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:07.588106   73230 cri.go:89] found id: ""
	I0906 20:06:07.588139   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.588149   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:07.588157   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:07.588210   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:07.630440   73230 cri.go:89] found id: ""
	I0906 20:06:07.630476   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.630494   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:07.630500   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:07.630551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:07.668826   73230 cri.go:89] found id: ""
	I0906 20:06:07.668870   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.668889   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:07.668898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:07.668962   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:07.706091   73230 cri.go:89] found id: ""
	I0906 20:06:07.706118   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.706130   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:07.706138   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:07.706196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:07.741679   73230 cri.go:89] found id: ""
	I0906 20:06:07.741708   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.741719   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:07.741726   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:07.741792   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:07.778240   73230 cri.go:89] found id: ""
	I0906 20:06:07.778277   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.778288   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:07.778296   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:07.778352   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:07.813183   73230 cri.go:89] found id: ""
	I0906 20:06:07.813212   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.813224   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:07.813232   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:07.813294   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:07.853938   73230 cri.go:89] found id: ""
	I0906 20:06:07.853970   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.853980   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:07.853988   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:07.854001   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:07.893540   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:07.893567   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:07.944219   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:07.944262   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:07.959601   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:07.959635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:08.034487   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:08.034513   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:08.034529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:10.611413   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:10.625273   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:10.625353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:10.664568   73230 cri.go:89] found id: ""
	I0906 20:06:10.664597   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.664609   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:10.664617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:10.664680   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:10.702743   73230 cri.go:89] found id: ""
	I0906 20:06:10.702772   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.702783   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:10.702790   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:10.702850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:10.739462   73230 cri.go:89] found id: ""
	I0906 20:06:10.739487   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.739504   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:10.739511   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:10.739572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:10.776316   73230 cri.go:89] found id: ""
	I0906 20:06:10.776344   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.776355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:10.776362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:10.776420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:10.809407   73230 cri.go:89] found id: ""
	I0906 20:06:10.809440   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.809451   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:10.809459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:10.809519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:10.844736   73230 cri.go:89] found id: ""
	I0906 20:06:10.844765   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.844777   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:10.844784   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:10.844851   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:10.880658   73230 cri.go:89] found id: ""
	I0906 20:06:10.880685   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.880693   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:10.880698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:10.880753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:10.917032   73230 cri.go:89] found id: ""
	I0906 20:06:10.917063   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.917074   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:10.917085   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:10.917100   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:10.980241   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:10.980272   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:10.995389   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:10.995435   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:11.070285   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:11.070313   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:11.070328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:11.155574   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:11.155607   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:13.703712   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:13.718035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:13.718093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:13.753578   73230 cri.go:89] found id: ""
	I0906 20:06:13.753603   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.753611   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:13.753617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:13.753659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:13.790652   73230 cri.go:89] found id: ""
	I0906 20:06:13.790681   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.790691   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:13.790697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:13.790749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:13.824243   73230 cri.go:89] found id: ""
	I0906 20:06:13.824278   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.824288   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:13.824293   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:13.824342   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:13.859647   73230 cri.go:89] found id: ""
	I0906 20:06:13.859691   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.859702   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:13.859721   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:13.859781   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:13.897026   73230 cri.go:89] found id: ""
	I0906 20:06:13.897061   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.897068   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:13.897075   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:13.897131   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:13.933904   73230 cri.go:89] found id: ""
	I0906 20:06:13.933927   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.933935   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:13.933941   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:13.933986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:13.969168   73230 cri.go:89] found id: ""
	I0906 20:06:13.969198   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.969210   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:13.969218   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:13.969295   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:14.005808   73230 cri.go:89] found id: ""
	I0906 20:06:14.005838   73230 logs.go:276] 0 containers: []
	W0906 20:06:14.005849   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:14.005862   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:14.005878   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:14.060878   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:14.060915   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:14.075388   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:14.075414   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:14.144942   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:14.144966   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:14.144981   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:14.233088   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:14.233139   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:16.776744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:16.790292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:16.790384   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:16.828877   73230 cri.go:89] found id: ""
	I0906 20:06:16.828910   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.828921   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:16.828929   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:16.829016   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:16.864413   73230 cri.go:89] found id: ""
	I0906 20:06:16.864440   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.864449   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:16.864455   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:16.864525   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:16.908642   73230 cri.go:89] found id: ""
	I0906 20:06:16.908676   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.908687   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:16.908694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:16.908748   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:16.952247   73230 cri.go:89] found id: ""
	I0906 20:06:16.952278   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.952286   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:16.952292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:16.952343   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:16.990986   73230 cri.go:89] found id: ""
	I0906 20:06:16.991013   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.991022   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:16.991028   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:16.991077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:17.031002   73230 cri.go:89] found id: ""
	I0906 20:06:17.031034   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.031045   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:17.031052   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:17.031114   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:17.077533   73230 cri.go:89] found id: ""
	I0906 20:06:17.077560   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.077572   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:17.077579   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:17.077646   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:17.116770   73230 cri.go:89] found id: ""
	I0906 20:06:17.116798   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.116806   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:17.116817   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:17.116834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.169300   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:17.169337   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:17.184266   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:17.184299   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:17.266371   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:17.266400   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:17.266419   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:17.343669   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:17.343698   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:19.886541   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:19.899891   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:19.899951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:19.946592   73230 cri.go:89] found id: ""
	I0906 20:06:19.946621   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.946630   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:19.946636   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:19.946686   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:19.981758   73230 cri.go:89] found id: ""
	I0906 20:06:19.981788   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.981797   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:19.981802   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:19.981854   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:20.018372   73230 cri.go:89] found id: ""
	I0906 20:06:20.018397   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.018405   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:20.018411   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:20.018460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:20.054380   73230 cri.go:89] found id: ""
	I0906 20:06:20.054428   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.054440   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:20.054449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:20.054521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:20.092343   73230 cri.go:89] found id: ""
	I0906 20:06:20.092376   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.092387   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:20.092395   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:20.092463   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:20.128568   73230 cri.go:89] found id: ""
	I0906 20:06:20.128594   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.128604   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:20.128610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:20.128657   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:20.166018   73230 cri.go:89] found id: ""
	I0906 20:06:20.166046   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.166057   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:20.166072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:20.166125   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:20.203319   73230 cri.go:89] found id: ""
	I0906 20:06:20.203347   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.203355   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:20.203365   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:20.203381   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:20.287217   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:20.287243   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:20.287259   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:20.372799   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:20.372834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:20.416595   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:20.416620   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:20.468340   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:20.468378   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:22.983259   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:22.997014   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:22.997098   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:23.034483   73230 cri.go:89] found id: ""
	I0906 20:06:23.034513   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.034524   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:23.034531   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:23.034597   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:23.072829   73230 cri.go:89] found id: ""
	I0906 20:06:23.072867   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.072878   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:23.072885   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:23.072949   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:23.110574   73230 cri.go:89] found id: ""
	I0906 20:06:23.110602   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.110613   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:23.110620   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:23.110684   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:23.149506   73230 cri.go:89] found id: ""
	I0906 20:06:23.149538   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.149550   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:23.149557   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:23.149619   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:23.191321   73230 cri.go:89] found id: ""
	I0906 20:06:23.191355   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.191367   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:23.191374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:23.191441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:23.233737   73230 cri.go:89] found id: ""
	I0906 20:06:23.233770   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.233791   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:23.233800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:23.233873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:23.270013   73230 cri.go:89] found id: ""
	I0906 20:06:23.270048   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.270060   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:23.270068   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:23.270127   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:23.309517   73230 cri.go:89] found id: ""
	I0906 20:06:23.309541   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.309549   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:23.309566   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:23.309578   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:23.380645   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:23.380675   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:23.380690   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:23.463656   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:23.463696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:23.504100   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:23.504134   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:23.557438   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:23.557483   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:26.074045   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:26.088006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:26.088072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:26.124445   73230 cri.go:89] found id: ""
	I0906 20:06:26.124469   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.124476   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:26.124482   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:26.124537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:26.158931   73230 cri.go:89] found id: ""
	I0906 20:06:26.158957   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.158968   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:26.158975   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:26.159035   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:26.197125   73230 cri.go:89] found id: ""
	I0906 20:06:26.197154   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.197164   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:26.197171   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:26.197234   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:26.233241   73230 cri.go:89] found id: ""
	I0906 20:06:26.233278   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.233291   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:26.233300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:26.233366   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:26.269910   73230 cri.go:89] found id: ""
	I0906 20:06:26.269943   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.269955   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:26.269962   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:26.270026   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:26.308406   73230 cri.go:89] found id: ""
	I0906 20:06:26.308439   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.308450   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:26.308459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:26.308521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:26.344248   73230 cri.go:89] found id: ""
	I0906 20:06:26.344276   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.344288   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:26.344295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:26.344353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:26.391794   73230 cri.go:89] found id: ""
	I0906 20:06:26.391827   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.391840   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:26.391851   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:26.391866   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:26.444192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:26.444231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:26.459113   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:26.459144   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:26.533920   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:26.533945   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:26.533960   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:26.616382   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:26.616416   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:29.160429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:29.175007   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:29.175063   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:29.212929   73230 cri.go:89] found id: ""
	I0906 20:06:29.212961   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.212972   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:29.212980   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:29.213042   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:29.250777   73230 cri.go:89] found id: ""
	I0906 20:06:29.250806   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.250815   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:29.250821   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:29.250870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:29.292222   73230 cri.go:89] found id: ""
	I0906 20:06:29.292253   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.292262   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:29.292268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:29.292331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:29.328379   73230 cri.go:89] found id: ""
	I0906 20:06:29.328413   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.328431   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:29.328436   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:29.328482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:29.366792   73230 cri.go:89] found id: ""
	I0906 20:06:29.366822   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.366834   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:29.366841   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:29.366903   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:29.402233   73230 cri.go:89] found id: ""
	I0906 20:06:29.402261   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.402270   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:29.402276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:29.402331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:29.436695   73230 cri.go:89] found id: ""
	I0906 20:06:29.436724   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.436731   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:29.436736   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:29.436787   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:29.473050   73230 cri.go:89] found id: ""
	I0906 20:06:29.473074   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.473082   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:29.473091   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:29.473101   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:29.524981   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:29.525018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:29.538698   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:29.538722   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:29.611026   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:29.611049   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:29.611064   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:29.686898   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:29.686931   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:32.228399   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:32.244709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:32.244775   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:32.285681   73230 cri.go:89] found id: ""
	I0906 20:06:32.285713   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.285724   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:32.285732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:32.285794   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:32.325312   73230 cri.go:89] found id: ""
	I0906 20:06:32.325340   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.325349   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:32.325355   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:32.325400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:32.361420   73230 cri.go:89] found id: ""
	I0906 20:06:32.361455   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.361468   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:32.361477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:32.361543   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:32.398881   73230 cri.go:89] found id: ""
	I0906 20:06:32.398956   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.398971   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:32.398984   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:32.399041   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:32.435336   73230 cri.go:89] found id: ""
	I0906 20:06:32.435362   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.435370   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:32.435375   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:32.435427   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:32.472849   73230 cri.go:89] found id: ""
	I0906 20:06:32.472900   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.472909   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:32.472914   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:32.472964   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:32.508176   73230 cri.go:89] found id: ""
	I0906 20:06:32.508199   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.508208   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:32.508213   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:32.508271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:32.550519   73230 cri.go:89] found id: ""
	I0906 20:06:32.550550   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.550561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:32.550576   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:32.550593   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:32.601362   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:32.601394   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:32.614821   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:32.614849   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:32.686044   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:32.686061   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:32.686074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:32.767706   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:32.767744   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:35.309159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:35.322386   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:35.322462   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:35.362909   73230 cri.go:89] found id: ""
	I0906 20:06:35.362937   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.362948   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:35.362955   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:35.363017   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:35.400591   73230 cri.go:89] found id: ""
	I0906 20:06:35.400621   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.400629   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:35.400635   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:35.400682   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:35.436547   73230 cri.go:89] found id: ""
	I0906 20:06:35.436578   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.436589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:35.436596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:35.436666   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:35.473130   73230 cri.go:89] found id: ""
	I0906 20:06:35.473155   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.473163   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:35.473168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:35.473244   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:35.509646   73230 cri.go:89] found id: ""
	I0906 20:06:35.509677   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.509687   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:35.509695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:35.509754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:35.547651   73230 cri.go:89] found id: ""
	I0906 20:06:35.547684   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.547696   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:35.547703   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:35.547761   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:35.608590   73230 cri.go:89] found id: ""
	I0906 20:06:35.608614   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.608624   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:35.608631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:35.608691   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:35.651508   73230 cri.go:89] found id: ""
	I0906 20:06:35.651550   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.651561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:35.651572   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:35.651585   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:35.705502   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:35.705542   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:35.719550   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:35.719577   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:35.791435   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:35.791461   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:35.791476   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:35.869018   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:35.869070   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:38.411587   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:38.425739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:38.425800   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:38.463534   73230 cri.go:89] found id: ""
	I0906 20:06:38.463560   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.463571   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:38.463578   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:38.463628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:38.499238   73230 cri.go:89] found id: ""
	I0906 20:06:38.499269   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.499280   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:38.499287   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:38.499340   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:38.536297   73230 cri.go:89] found id: ""
	I0906 20:06:38.536334   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.536345   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:38.536352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:38.536417   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:38.573672   73230 cri.go:89] found id: ""
	I0906 20:06:38.573701   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.573712   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:38.573720   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:38.573779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:38.610913   73230 cri.go:89] found id: ""
	I0906 20:06:38.610937   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.610945   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:38.610950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:38.610996   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:38.647335   73230 cri.go:89] found id: ""
	I0906 20:06:38.647359   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.647368   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:38.647374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:38.647418   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:38.684054   73230 cri.go:89] found id: ""
	I0906 20:06:38.684084   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.684097   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:38.684106   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:38.684174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:38.731134   73230 cri.go:89] found id: ""
	I0906 20:06:38.731161   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.731173   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:38.731183   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:38.731199   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:38.787757   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:38.787798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:38.802920   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:38.802955   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:38.889219   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:38.889246   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:38.889261   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:38.964999   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:38.965042   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:41.504406   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:41.518111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:41.518169   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:41.558701   73230 cri.go:89] found id: ""
	I0906 20:06:41.558727   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.558738   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:41.558746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:41.558807   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:41.595986   73230 cri.go:89] found id: ""
	I0906 20:06:41.596009   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.596017   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:41.596023   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:41.596070   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:41.631462   73230 cri.go:89] found id: ""
	I0906 20:06:41.631486   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.631494   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:41.631504   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:41.631559   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:41.669646   73230 cri.go:89] found id: ""
	I0906 20:06:41.669674   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.669686   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:41.669693   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:41.669754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:41.708359   73230 cri.go:89] found id: ""
	I0906 20:06:41.708383   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.708391   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:41.708398   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:41.708446   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:41.745712   73230 cri.go:89] found id: ""
	I0906 20:06:41.745737   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.745750   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:41.745756   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:41.745804   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:41.781862   73230 cri.go:89] found id: ""
	I0906 20:06:41.781883   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.781892   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:41.781898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:41.781946   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:41.816687   73230 cri.go:89] found id: ""
	I0906 20:06:41.816714   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.816722   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:41.816730   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:41.816742   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:41.830115   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:41.830145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:41.908303   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:41.908334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:41.908348   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:42.001459   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:42.001501   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:42.061341   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:42.061368   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:44.619574   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:44.633355   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:44.633423   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:44.668802   73230 cri.go:89] found id: ""
	I0906 20:06:44.668834   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.668845   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:44.668852   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:44.668924   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:44.707613   73230 cri.go:89] found id: ""
	I0906 20:06:44.707639   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.707650   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:44.707657   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:44.707727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:44.744202   73230 cri.go:89] found id: ""
	I0906 20:06:44.744231   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.744243   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:44.744250   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:44.744311   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:44.783850   73230 cri.go:89] found id: ""
	I0906 20:06:44.783873   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.783881   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:44.783886   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:44.783938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:44.824986   73230 cri.go:89] found id: ""
	I0906 20:06:44.825011   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.825019   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:44.825025   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:44.825073   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:44.865157   73230 cri.go:89] found id: ""
	I0906 20:06:44.865182   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.865190   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:44.865196   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:44.865258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:44.908268   73230 cri.go:89] found id: ""
	I0906 20:06:44.908295   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.908305   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:44.908312   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:44.908359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:44.948669   73230 cri.go:89] found id: ""
	I0906 20:06:44.948697   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.948706   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:44.948716   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:44.948731   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:44.961862   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:44.961887   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:45.036756   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:45.036783   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:45.036801   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:45.116679   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:45.116717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:45.159756   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:45.159784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:47.714682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:47.730754   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:47.730820   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:47.783208   73230 cri.go:89] found id: ""
	I0906 20:06:47.783239   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.783249   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:47.783255   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:47.783312   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:47.844291   73230 cri.go:89] found id: ""
	I0906 20:06:47.844324   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.844336   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:47.844344   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:47.844407   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:47.881877   73230 cri.go:89] found id: ""
	I0906 20:06:47.881905   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.881913   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:47.881919   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:47.881986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:47.918034   73230 cri.go:89] found id: ""
	I0906 20:06:47.918058   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.918066   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:47.918072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:47.918126   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:47.957045   73230 cri.go:89] found id: ""
	I0906 20:06:47.957068   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.957077   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:47.957083   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:47.957134   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:47.993849   73230 cri.go:89] found id: ""
	I0906 20:06:47.993872   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.993883   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:47.993890   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:47.993951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:48.031214   73230 cri.go:89] found id: ""
	I0906 20:06:48.031239   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.031249   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:48.031257   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:48.031314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:48.064634   73230 cri.go:89] found id: ""
	I0906 20:06:48.064673   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.064690   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:48.064698   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:48.064710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:48.104307   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:48.104343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:48.158869   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:48.158900   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:48.173000   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:48.173026   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:48.248751   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:48.248774   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:48.248792   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:50.833490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:50.847618   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:50.847702   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:50.887141   73230 cri.go:89] found id: ""
	I0906 20:06:50.887167   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.887176   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:50.887181   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:50.887228   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:50.923435   73230 cri.go:89] found id: ""
	I0906 20:06:50.923480   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.923491   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:50.923499   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:50.923567   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:50.959704   73230 cri.go:89] found id: ""
	I0906 20:06:50.959730   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.959742   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:50.959748   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:50.959810   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:50.992994   73230 cri.go:89] found id: ""
	I0906 20:06:50.993023   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.993032   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:50.993037   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:50.993091   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:51.031297   73230 cri.go:89] found id: ""
	I0906 20:06:51.031321   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.031329   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:51.031335   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:51.031390   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:51.067698   73230 cri.go:89] found id: ""
	I0906 20:06:51.067721   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.067732   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:51.067739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:51.067799   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:51.102240   73230 cri.go:89] found id: ""
	I0906 20:06:51.102268   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.102278   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:51.102285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:51.102346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:51.137146   73230 cri.go:89] found id: ""
	I0906 20:06:51.137172   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.137183   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:51.137194   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:51.137209   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:51.216158   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:51.216194   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:51.256063   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:51.256088   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:51.309176   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:51.309210   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:51.323515   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:51.323544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:51.393281   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:53.893714   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:53.907807   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:53.907863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:53.947929   73230 cri.go:89] found id: ""
	I0906 20:06:53.947954   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.947962   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:53.947968   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:53.948014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:53.983005   73230 cri.go:89] found id: ""
	I0906 20:06:53.983028   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.983041   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:53.983046   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:53.983094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:54.019004   73230 cri.go:89] found id: ""
	I0906 20:06:54.019027   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.019035   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:54.019041   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:54.019094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:54.060240   73230 cri.go:89] found id: ""
	I0906 20:06:54.060266   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.060279   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:54.060285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:54.060336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:54.096432   73230 cri.go:89] found id: ""
	I0906 20:06:54.096461   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.096469   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:54.096475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:54.096537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:54.132992   73230 cri.go:89] found id: ""
	I0906 20:06:54.133021   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.133033   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:54.133040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:54.133103   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:54.172730   73230 cri.go:89] found id: ""
	I0906 20:06:54.172754   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.172766   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:54.172778   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:54.172839   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:54.212050   73230 cri.go:89] found id: ""
	I0906 20:06:54.212191   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.212202   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:54.212212   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:54.212234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:54.263603   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:54.263647   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:54.281291   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:54.281324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:54.359523   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:54.359545   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:54.359568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:54.442230   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:54.442265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:56.983744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:56.997451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:56.997527   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:57.034792   73230 cri.go:89] found id: ""
	I0906 20:06:57.034817   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.034825   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:57.034831   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:57.034883   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:57.073709   73230 cri.go:89] found id: ""
	I0906 20:06:57.073735   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.073745   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:57.073751   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:57.073803   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:57.122758   73230 cri.go:89] found id: ""
	I0906 20:06:57.122787   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.122798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:57.122808   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:57.122865   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:57.158208   73230 cri.go:89] found id: ""
	I0906 20:06:57.158242   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.158252   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:57.158262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:57.158323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:57.194004   73230 cri.go:89] found id: ""
	I0906 20:06:57.194029   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.194037   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:57.194044   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:57.194099   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:57.230068   73230 cri.go:89] found id: ""
	I0906 20:06:57.230099   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.230111   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:57.230119   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:57.230186   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:57.265679   73230 cri.go:89] found id: ""
	I0906 20:06:57.265707   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.265718   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:57.265735   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:57.265801   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:57.304917   73230 cri.go:89] found id: ""
	I0906 20:06:57.304946   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.304956   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:57.304967   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:57.304980   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:57.357238   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:57.357276   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:57.371648   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:57.371674   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:57.438572   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:57.438590   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:57.438602   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:57.528212   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:57.528256   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:00.071140   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:00.084975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:00.085055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:00.119680   73230 cri.go:89] found id: ""
	I0906 20:07:00.119713   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.119725   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:00.119732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:00.119786   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:00.155678   73230 cri.go:89] found id: ""
	I0906 20:07:00.155704   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.155716   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:00.155723   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:00.155769   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:00.190758   73230 cri.go:89] found id: ""
	I0906 20:07:00.190783   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.190793   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:00.190799   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:00.190863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:00.228968   73230 cri.go:89] found id: ""
	I0906 20:07:00.228999   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.229010   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:00.229018   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:00.229079   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:00.265691   73230 cri.go:89] found id: ""
	I0906 20:07:00.265722   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.265733   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:00.265741   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:00.265806   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:00.305785   73230 cri.go:89] found id: ""
	I0906 20:07:00.305812   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.305820   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:00.305825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:00.305872   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:00.341872   73230 cri.go:89] found id: ""
	I0906 20:07:00.341895   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.341902   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:00.341907   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:00.341955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:00.377661   73230 cri.go:89] found id: ""
	I0906 20:07:00.377690   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.377702   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:00.377712   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:00.377725   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:00.428215   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:00.428254   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:00.443135   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:00.443165   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:00.518745   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:00.518768   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:00.518781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:00.604413   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:00.604448   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.146657   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:03.160610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:03.160665   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:03.200916   73230 cri.go:89] found id: ""
	I0906 20:07:03.200950   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.200960   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:03.200967   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:03.201029   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:03.239550   73230 cri.go:89] found id: ""
	I0906 20:07:03.239579   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.239592   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:03.239600   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:03.239660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:03.278216   73230 cri.go:89] found id: ""
	I0906 20:07:03.278244   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.278255   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:03.278263   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:03.278325   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:03.315028   73230 cri.go:89] found id: ""
	I0906 20:07:03.315059   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.315073   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:03.315080   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:03.315146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:03.354614   73230 cri.go:89] found id: ""
	I0906 20:07:03.354638   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.354647   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:03.354652   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:03.354710   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:03.390105   73230 cri.go:89] found id: ""
	I0906 20:07:03.390129   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.390138   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:03.390144   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:03.390190   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:03.427651   73230 cri.go:89] found id: ""
	I0906 20:07:03.427679   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.427687   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:03.427695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:03.427763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:03.463191   73230 cri.go:89] found id: ""
	I0906 20:07:03.463220   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.463230   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:03.463242   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:03.463288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:03.476966   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:03.476995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:03.558415   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:03.558441   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:03.558457   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:03.641528   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:03.641564   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.680916   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:03.680943   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:06.235947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:06.249589   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:06.249667   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:06.289193   73230 cri.go:89] found id: ""
	I0906 20:07:06.289223   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.289235   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:06.289242   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:06.289305   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:06.324847   73230 cri.go:89] found id: ""
	I0906 20:07:06.324887   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.324898   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:06.324904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:06.324966   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:06.361755   73230 cri.go:89] found id: ""
	I0906 20:07:06.361786   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.361798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:06.361806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:06.361873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:06.397739   73230 cri.go:89] found id: ""
	I0906 20:07:06.397766   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.397775   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:06.397780   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:06.397833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:06.432614   73230 cri.go:89] found id: ""
	I0906 20:07:06.432641   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.432649   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:06.432655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:06.432703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:06.467784   73230 cri.go:89] found id: ""
	I0906 20:07:06.467812   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.467823   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:06.467830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:06.467890   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:06.507055   73230 cri.go:89] found id: ""
	I0906 20:07:06.507085   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.507096   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:06.507104   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:06.507165   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:06.544688   73230 cri.go:89] found id: ""
	I0906 20:07:06.544720   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.544730   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:06.544740   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:06.544751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:06.597281   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:06.597314   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:06.612749   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:06.612774   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:06.684973   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:06.684993   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:06.685006   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:06.764306   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:06.764345   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.304340   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:09.317460   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:09.317536   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:09.354289   73230 cri.go:89] found id: ""
	I0906 20:07:09.354312   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.354322   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:09.354327   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:09.354373   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:09.390962   73230 cri.go:89] found id: ""
	I0906 20:07:09.390997   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.391008   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:09.391015   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:09.391076   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:09.427456   73230 cri.go:89] found id: ""
	I0906 20:07:09.427491   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.427502   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:09.427510   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:09.427572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:09.462635   73230 cri.go:89] found id: ""
	I0906 20:07:09.462667   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.462680   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:09.462687   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:09.462749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:09.506726   73230 cri.go:89] found id: ""
	I0906 20:07:09.506751   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.506767   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:09.506775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:09.506836   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:09.541974   73230 cri.go:89] found id: ""
	I0906 20:07:09.541999   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.542009   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:09.542017   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:09.542077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:09.580069   73230 cri.go:89] found id: ""
	I0906 20:07:09.580104   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.580115   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:09.580123   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:09.580182   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:09.616025   73230 cri.go:89] found id: ""
	I0906 20:07:09.616054   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.616065   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:09.616075   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:09.616090   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:09.630967   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:09.630993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:09.716733   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:09.716766   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:09.716782   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:09.792471   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:09.792503   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.832326   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:09.832357   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:12.385565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:12.398694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:12.398768   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:12.437446   73230 cri.go:89] found id: ""
	I0906 20:07:12.437473   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.437482   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:12.437487   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:12.437555   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:12.473328   73230 cri.go:89] found id: ""
	I0906 20:07:12.473355   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.473362   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:12.473372   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:12.473429   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:12.510935   73230 cri.go:89] found id: ""
	I0906 20:07:12.510962   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.510972   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:12.510979   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:12.511044   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:12.547961   73230 cri.go:89] found id: ""
	I0906 20:07:12.547991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.547999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:12.548005   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:12.548062   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:12.585257   73230 cri.go:89] found id: ""
	I0906 20:07:12.585291   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.585302   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:12.585309   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:12.585369   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:12.623959   73230 cri.go:89] found id: ""
	I0906 20:07:12.623991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.624003   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:12.624010   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:12.624066   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:12.662795   73230 cri.go:89] found id: ""
	I0906 20:07:12.662822   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.662832   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:12.662840   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:12.662896   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:12.700941   73230 cri.go:89] found id: ""
	I0906 20:07:12.700967   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.700974   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:12.700983   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:12.700994   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:12.785989   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:12.786025   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:12.826678   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:12.826704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:12.881558   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:12.881599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:12.896035   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:12.896065   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:12.970721   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:15.471171   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:15.484466   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:15.484541   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:15.518848   73230 cri.go:89] found id: ""
	I0906 20:07:15.518875   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.518886   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:15.518894   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:15.518953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:15.553444   73230 cri.go:89] found id: ""
	I0906 20:07:15.553468   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.553476   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:15.553482   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:15.553528   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:15.589136   73230 cri.go:89] found id: ""
	I0906 20:07:15.589160   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.589168   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:15.589173   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:15.589220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:15.624410   73230 cri.go:89] found id: ""
	I0906 20:07:15.624434   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.624443   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:15.624449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:15.624492   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:15.661506   73230 cri.go:89] found id: ""
	I0906 20:07:15.661535   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.661547   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:15.661555   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:15.661615   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:15.699126   73230 cri.go:89] found id: ""
	I0906 20:07:15.699148   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.699155   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:15.699161   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:15.699207   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:15.736489   73230 cri.go:89] found id: ""
	I0906 20:07:15.736523   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.736534   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:15.736542   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:15.736604   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:15.771988   73230 cri.go:89] found id: ""
	I0906 20:07:15.772013   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.772020   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:15.772029   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:15.772045   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:15.822734   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:15.822765   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:15.836820   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:15.836872   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:15.915073   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:15.915111   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:15.915126   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:15.988476   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:15.988514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:18.528710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:18.541450   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:18.541526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:18.581278   73230 cri.go:89] found id: ""
	I0906 20:07:18.581308   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.581317   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:18.581323   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:18.581381   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:18.616819   73230 cri.go:89] found id: ""
	I0906 20:07:18.616843   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.616850   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:18.616871   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:18.616923   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:18.655802   73230 cri.go:89] found id: ""
	I0906 20:07:18.655827   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.655842   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:18.655849   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:18.655908   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:18.693655   73230 cri.go:89] found id: ""
	I0906 20:07:18.693679   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.693689   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:18.693696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:18.693779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:18.730882   73230 cri.go:89] found id: ""
	I0906 20:07:18.730914   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.730924   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:18.730931   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:18.730994   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:18.767219   73230 cri.go:89] found id: ""
	I0906 20:07:18.767243   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.767250   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:18.767256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:18.767316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:18.802207   73230 cri.go:89] found id: ""
	I0906 20:07:18.802230   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.802238   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:18.802243   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:18.802300   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:18.840449   73230 cri.go:89] found id: ""
	I0906 20:07:18.840471   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.840481   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:18.840491   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:18.840504   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:18.892430   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:18.892469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:18.906527   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:18.906561   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:18.980462   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:18.980483   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:18.980494   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:19.059550   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:19.059588   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:21.599879   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:21.614131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:21.614205   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:21.650887   73230 cri.go:89] found id: ""
	I0906 20:07:21.650910   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.650919   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:21.650924   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:21.650978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:21.684781   73230 cri.go:89] found id: ""
	I0906 20:07:21.684809   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.684819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:21.684827   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:21.684907   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:21.722685   73230 cri.go:89] found id: ""
	I0906 20:07:21.722711   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.722722   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:21.722729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:21.722791   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:21.757581   73230 cri.go:89] found id: ""
	I0906 20:07:21.757607   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.757616   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:21.757622   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:21.757670   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:21.791984   73230 cri.go:89] found id: ""
	I0906 20:07:21.792008   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.792016   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:21.792022   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:21.792072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:21.853612   73230 cri.go:89] found id: ""
	I0906 20:07:21.853636   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.853644   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:21.853650   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:21.853699   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:21.894184   73230 cri.go:89] found id: ""
	I0906 20:07:21.894232   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.894247   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:21.894256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:21.894318   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:21.930731   73230 cri.go:89] found id: ""
	I0906 20:07:21.930758   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.930768   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:21.930779   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:21.930798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:21.969174   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:21.969207   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:22.017647   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:22.017680   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:22.033810   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:22.033852   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:22.111503   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:22.111530   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:22.111544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:24.696348   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:24.710428   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:24.710506   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:24.747923   73230 cri.go:89] found id: ""
	I0906 20:07:24.747958   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.747969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:24.747977   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:24.748037   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:24.782216   73230 cri.go:89] found id: ""
	I0906 20:07:24.782250   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.782260   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:24.782268   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:24.782329   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:24.822093   73230 cri.go:89] found id: ""
	I0906 20:07:24.822126   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.822137   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:24.822148   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:24.822217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:24.857166   73230 cri.go:89] found id: ""
	I0906 20:07:24.857202   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.857213   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:24.857224   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:24.857314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:24.892575   73230 cri.go:89] found id: ""
	I0906 20:07:24.892610   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.892621   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:24.892629   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:24.892689   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:24.929102   73230 cri.go:89] found id: ""
	I0906 20:07:24.929130   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.929140   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:24.929149   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:24.929206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:24.964224   73230 cri.go:89] found id: ""
	I0906 20:07:24.964257   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.964268   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:24.964276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:24.964337   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:25.000453   73230 cri.go:89] found id: ""
	I0906 20:07:25.000475   73230 logs.go:276] 0 containers: []
	W0906 20:07:25.000485   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:25.000496   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:25.000511   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:25.041824   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:25.041851   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:25.093657   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:25.093692   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:25.107547   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:25.107576   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:25.178732   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:25.178755   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:25.178771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:27.764271   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:27.777315   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:27.777389   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:27.812621   73230 cri.go:89] found id: ""
	I0906 20:07:27.812644   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.812655   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:27.812663   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:27.812718   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:27.853063   73230 cri.go:89] found id: ""
	I0906 20:07:27.853093   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.853104   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:27.853112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:27.853171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:27.894090   73230 cri.go:89] found id: ""
	I0906 20:07:27.894118   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.894130   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:27.894137   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:27.894196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:27.930764   73230 cri.go:89] found id: ""
	I0906 20:07:27.930791   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.930802   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:27.930809   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:27.930870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:27.967011   73230 cri.go:89] found id: ""
	I0906 20:07:27.967036   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.967047   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:27.967053   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:27.967111   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:28.002119   73230 cri.go:89] found id: ""
	I0906 20:07:28.002146   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.002157   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:28.002164   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:28.002226   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:28.043884   73230 cri.go:89] found id: ""
	I0906 20:07:28.043909   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.043917   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:28.043923   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:28.043979   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:28.081510   73230 cri.go:89] found id: ""
	I0906 20:07:28.081538   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.081547   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:28.081557   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:28.081568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:28.159077   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:28.159109   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:28.207489   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:28.207527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:28.267579   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:28.267613   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:28.287496   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:28.287529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:28.376555   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:30.876683   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:30.890344   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:30.890424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:30.930618   73230 cri.go:89] found id: ""
	I0906 20:07:30.930647   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.930658   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:30.930666   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:30.930727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:30.968801   73230 cri.go:89] found id: ""
	I0906 20:07:30.968825   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.968834   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:30.968839   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:30.968911   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:31.006437   73230 cri.go:89] found id: ""
	I0906 20:07:31.006463   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.006472   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:31.006477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:31.006531   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:31.042091   73230 cri.go:89] found id: ""
	I0906 20:07:31.042117   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.042125   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:31.042131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:31.042177   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:31.079244   73230 cri.go:89] found id: ""
	I0906 20:07:31.079271   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.079280   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:31.079286   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:31.079336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:31.116150   73230 cri.go:89] found id: ""
	I0906 20:07:31.116174   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.116182   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:31.116188   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:31.116240   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:31.151853   73230 cri.go:89] found id: ""
	I0906 20:07:31.151877   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.151886   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:31.151892   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:31.151939   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:31.189151   73230 cri.go:89] found id: ""
	I0906 20:07:31.189181   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.189192   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:31.189203   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:31.189218   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:31.234466   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:31.234493   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:31.286254   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:31.286288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:31.300500   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:31.300525   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:31.372968   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:31.372987   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:31.372997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:33.949865   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:33.964791   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:33.964849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:34.027049   73230 cri.go:89] found id: ""
	I0906 20:07:34.027082   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.027094   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:34.027102   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:34.027162   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:34.080188   73230 cri.go:89] found id: ""
	I0906 20:07:34.080218   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.080230   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:34.080237   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:34.080320   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:34.124146   73230 cri.go:89] found id: ""
	I0906 20:07:34.124171   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.124179   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:34.124185   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:34.124230   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:34.161842   73230 cri.go:89] found id: ""
	I0906 20:07:34.161864   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.161872   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:34.161878   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:34.161938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:34.201923   73230 cri.go:89] found id: ""
	I0906 20:07:34.201951   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.201961   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:34.201967   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:34.202032   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:34.246609   73230 cri.go:89] found id: ""
	I0906 20:07:34.246644   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.246656   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:34.246665   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:34.246739   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:34.287616   73230 cri.go:89] found id: ""
	I0906 20:07:34.287646   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.287657   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:34.287663   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:34.287721   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:34.322270   73230 cri.go:89] found id: ""
	I0906 20:07:34.322297   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.322309   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:34.322320   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:34.322334   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:34.378598   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:34.378633   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:34.392748   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:34.392781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:34.468620   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:34.468648   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:34.468663   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:34.548290   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:34.548324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:37.095962   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:37.110374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:37.110459   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:37.146705   73230 cri.go:89] found id: ""
	I0906 20:07:37.146732   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.146740   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:37.146746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:37.146802   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:37.185421   73230 cri.go:89] found id: ""
	I0906 20:07:37.185449   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.185461   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:37.185468   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:37.185532   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:37.224767   73230 cri.go:89] found id: ""
	I0906 20:07:37.224793   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.224801   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:37.224806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:37.224884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:37.265392   73230 cri.go:89] found id: ""
	I0906 20:07:37.265422   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.265432   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:37.265438   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:37.265496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:37.302065   73230 cri.go:89] found id: ""
	I0906 20:07:37.302093   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.302101   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:37.302107   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:37.302171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:37.341466   73230 cri.go:89] found id: ""
	I0906 20:07:37.341493   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.341505   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:37.341513   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:37.341576   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.377701   73230 cri.go:89] found id: ""
	I0906 20:07:37.377724   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.377732   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:37.377738   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:37.377798   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:37.412927   73230 cri.go:89] found id: ""
	I0906 20:07:37.412955   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.412966   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:37.412977   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:37.412993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:37.427750   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:37.427776   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:37.500904   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:37.500928   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:37.500945   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:37.583204   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:37.583246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:37.623477   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:37.623512   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.179798   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:40.194295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:40.194372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:40.229731   73230 cri.go:89] found id: ""
	I0906 20:07:40.229768   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.229779   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:40.229787   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:40.229848   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:40.275909   73230 cri.go:89] found id: ""
	I0906 20:07:40.275943   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.275956   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:40.275964   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:40.276049   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:40.316552   73230 cri.go:89] found id: ""
	I0906 20:07:40.316585   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.316594   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:40.316599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:40.316647   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:40.355986   73230 cri.go:89] found id: ""
	I0906 20:07:40.356017   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.356028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:40.356036   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:40.356095   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:40.396486   73230 cri.go:89] found id: ""
	I0906 20:07:40.396522   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.396535   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:40.396544   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:40.396609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:40.440311   73230 cri.go:89] found id: ""
	I0906 20:07:40.440338   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.440346   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:40.440352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:40.440414   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:40.476753   73230 cri.go:89] found id: ""
	I0906 20:07:40.476781   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.476790   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:40.476797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:40.476844   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:40.514462   73230 cri.go:89] found id: ""
	I0906 20:07:40.514489   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.514500   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:40.514511   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:40.514527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:40.553670   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:40.553700   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.608304   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:40.608343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:40.622486   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:40.622514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:40.699408   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:40.699434   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:40.699451   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.278892   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:43.292455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:43.292526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:43.328900   73230 cri.go:89] found id: ""
	I0906 20:07:43.328929   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.328940   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:43.328948   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:43.329009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:43.366728   73230 cri.go:89] found id: ""
	I0906 20:07:43.366754   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.366762   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:43.366768   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:43.366817   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:43.401566   73230 cri.go:89] found id: ""
	I0906 20:07:43.401590   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.401599   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:43.401604   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:43.401650   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:43.437022   73230 cri.go:89] found id: ""
	I0906 20:07:43.437051   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.437063   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:43.437072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:43.437140   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:43.473313   73230 cri.go:89] found id: ""
	I0906 20:07:43.473342   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.473354   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:43.473360   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:43.473420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:43.513590   73230 cri.go:89] found id: ""
	I0906 20:07:43.513616   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.513624   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:43.513630   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:43.513690   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:43.549974   73230 cri.go:89] found id: ""
	I0906 20:07:43.550011   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.550025   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:43.550032   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:43.550100   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:43.592386   73230 cri.go:89] found id: ""
	I0906 20:07:43.592426   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.592444   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:43.592454   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:43.592482   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:43.607804   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:43.607841   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:43.679533   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:43.679568   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:43.679580   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.762111   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:43.762145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:43.802883   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:43.802908   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:46.358429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:46.371252   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:46.371326   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:46.406397   73230 cri.go:89] found id: ""
	I0906 20:07:46.406420   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.406430   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:46.406437   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:46.406496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:46.452186   73230 cri.go:89] found id: ""
	I0906 20:07:46.452209   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.452218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:46.452223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:46.452288   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:46.489418   73230 cri.go:89] found id: ""
	I0906 20:07:46.489443   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.489454   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:46.489461   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:46.489523   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:46.529650   73230 cri.go:89] found id: ""
	I0906 20:07:46.529679   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.529690   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:46.529698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:46.529760   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:46.566429   73230 cri.go:89] found id: ""
	I0906 20:07:46.566454   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.566466   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:46.566474   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:46.566539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:46.604999   73230 cri.go:89] found id: ""
	I0906 20:07:46.605026   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.605034   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:46.605040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:46.605085   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:46.643116   73230 cri.go:89] found id: ""
	I0906 20:07:46.643144   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.643155   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:46.643162   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:46.643222   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:46.679734   73230 cri.go:89] found id: ""
	I0906 20:07:46.679756   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.679764   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:46.679772   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:46.679784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:46.736380   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:46.736430   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:46.750649   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:46.750681   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:46.833098   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:46.833130   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:46.833146   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:46.912223   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:46.912267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.453662   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:49.466520   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:49.466585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:49.508009   73230 cri.go:89] found id: ""
	I0906 20:07:49.508038   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.508049   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:49.508056   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:49.508119   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:49.545875   73230 cri.go:89] found id: ""
	I0906 20:07:49.545900   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.545911   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:49.545918   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:49.545978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:49.584899   73230 cri.go:89] found id: ""
	I0906 20:07:49.584926   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.584933   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:49.584940   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:49.585001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:49.621044   73230 cri.go:89] found id: ""
	I0906 20:07:49.621073   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.621085   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:49.621092   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:49.621146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:49.657074   73230 cri.go:89] found id: ""
	I0906 20:07:49.657099   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.657108   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:49.657115   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:49.657174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:49.693734   73230 cri.go:89] found id: ""
	I0906 20:07:49.693759   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.693767   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:49.693773   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:49.693827   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:49.729920   73230 cri.go:89] found id: ""
	I0906 20:07:49.729950   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.729960   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:49.729965   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:49.730014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:49.765282   73230 cri.go:89] found id: ""
	I0906 20:07:49.765313   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.765324   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:49.765335   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:49.765350   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:49.842509   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:49.842531   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:49.842543   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:49.920670   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:49.920704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.961193   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:49.961220   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:50.014331   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:50.014366   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:52.529758   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:52.543533   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:52.543596   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:52.582802   73230 cri.go:89] found id: ""
	I0906 20:07:52.582826   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.582838   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:52.582845   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:52.582909   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:52.625254   73230 cri.go:89] found id: ""
	I0906 20:07:52.625287   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.625308   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:52.625317   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:52.625383   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:52.660598   73230 cri.go:89] found id: ""
	I0906 20:07:52.660621   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.660632   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:52.660640   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:52.660703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:52.702980   73230 cri.go:89] found id: ""
	I0906 20:07:52.703004   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.703014   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:52.703021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:52.703082   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:52.740361   73230 cri.go:89] found id: ""
	I0906 20:07:52.740387   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.740394   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:52.740400   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:52.740447   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:52.780011   73230 cri.go:89] found id: ""
	I0906 20:07:52.780043   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.780056   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:52.780063   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:52.780123   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:52.825546   73230 cri.go:89] found id: ""
	I0906 20:07:52.825583   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.825595   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:52.825602   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:52.825659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:52.864347   73230 cri.go:89] found id: ""
	I0906 20:07:52.864381   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.864393   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:52.864403   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:52.864417   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:52.943041   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:52.943077   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:52.986158   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:52.986185   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:53.039596   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:53.039635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:53.054265   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:53.054295   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:53.125160   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:55.626058   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:55.639631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:55.639705   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:55.677283   73230 cri.go:89] found id: ""
	I0906 20:07:55.677304   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.677312   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:55.677317   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:55.677372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:55.714371   73230 cri.go:89] found id: ""
	I0906 20:07:55.714402   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.714414   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:55.714422   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:55.714509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:55.753449   73230 cri.go:89] found id: ""
	I0906 20:07:55.753487   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.753500   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:55.753507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:55.753575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:55.792955   73230 cri.go:89] found id: ""
	I0906 20:07:55.792987   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.792999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:55.793006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:55.793074   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:55.827960   73230 cri.go:89] found id: ""
	I0906 20:07:55.827985   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.827996   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:55.828003   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:55.828052   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:55.867742   73230 cri.go:89] found id: ""
	I0906 20:07:55.867765   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.867778   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:55.867785   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:55.867849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:55.907328   73230 cri.go:89] found id: ""
	I0906 20:07:55.907352   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.907359   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:55.907365   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:55.907424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:55.946057   73230 cri.go:89] found id: ""
	I0906 20:07:55.946091   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.946099   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:55.946108   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:55.946119   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:56.033579   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:56.033598   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:56.033611   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:56.116337   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:56.116372   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:56.163397   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:56.163428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:56.217189   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:56.217225   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:58.736147   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:58.749729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:58.749833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:58.786375   73230 cri.go:89] found id: ""
	I0906 20:07:58.786399   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.786406   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:58.786412   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:58.786460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:58.825188   73230 cri.go:89] found id: ""
	I0906 20:07:58.825210   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.825218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:58.825223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:58.825271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:58.866734   73230 cri.go:89] found id: ""
	I0906 20:07:58.866756   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.866764   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:58.866769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:58.866823   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:58.909742   73230 cri.go:89] found id: ""
	I0906 20:07:58.909774   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.909785   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:58.909793   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:58.909850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:58.950410   73230 cri.go:89] found id: ""
	I0906 20:07:58.950438   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.950447   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:58.950452   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:58.950500   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:58.987431   73230 cri.go:89] found id: ""
	I0906 20:07:58.987454   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.987462   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:58.987468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:58.987518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:59.023432   73230 cri.go:89] found id: ""
	I0906 20:07:59.023462   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.023474   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:59.023482   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:59.023544   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:59.057695   73230 cri.go:89] found id: ""
	I0906 20:07:59.057724   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.057734   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:59.057743   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:59.057755   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:59.109634   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:59.109671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:59.125436   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:59.125479   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:59.202018   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:59.202040   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:59.202054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:59.281418   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:59.281456   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:01.823947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:01.839055   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:01.839115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:01.876178   73230 cri.go:89] found id: ""
	I0906 20:08:01.876206   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.876215   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:01.876220   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:01.876274   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:01.912000   73230 cri.go:89] found id: ""
	I0906 20:08:01.912028   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.912038   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:01.912045   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:01.912107   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:01.948382   73230 cri.go:89] found id: ""
	I0906 20:08:01.948412   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.948420   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:01.948426   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:01.948474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:01.982991   73230 cri.go:89] found id: ""
	I0906 20:08:01.983019   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.983028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:01.983033   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:01.983080   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:02.016050   73230 cri.go:89] found id: ""
	I0906 20:08:02.016076   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.016085   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:02.016091   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:02.016151   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:02.051087   73230 cri.go:89] found id: ""
	I0906 20:08:02.051125   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.051137   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:02.051150   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:02.051214   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:02.093230   73230 cri.go:89] found id: ""
	I0906 20:08:02.093254   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.093263   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:02.093268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:02.093323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:02.130580   73230 cri.go:89] found id: ""
	I0906 20:08:02.130609   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.130619   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:02.130629   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:02.130644   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:02.183192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:02.183231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:02.199079   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:02.199110   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:02.274259   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:02.274279   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:02.274303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:02.356198   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:02.356234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:04.899180   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:04.912879   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:04.912955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:04.950598   73230 cri.go:89] found id: ""
	I0906 20:08:04.950632   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.950642   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:04.950656   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:04.950713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:04.986474   73230 cri.go:89] found id: ""
	I0906 20:08:04.986504   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.986513   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:04.986519   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:04.986570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:05.025837   73230 cri.go:89] found id: ""
	I0906 20:08:05.025868   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.025877   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:05.025884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:05.025934   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:05.063574   73230 cri.go:89] found id: ""
	I0906 20:08:05.063613   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.063622   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:05.063628   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:05.063674   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:05.101341   73230 cri.go:89] found id: ""
	I0906 20:08:05.101371   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.101383   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:05.101390   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:05.101461   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:05.148551   73230 cri.go:89] found id: ""
	I0906 20:08:05.148580   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.148591   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:05.148599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:05.148668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:05.186907   73230 cri.go:89] found id: ""
	I0906 20:08:05.186935   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.186945   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:05.186953   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:05.187019   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:05.226237   73230 cri.go:89] found id: ""
	I0906 20:08:05.226265   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.226275   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:05.226287   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:05.226300   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:05.242892   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:05.242925   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:05.317797   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:05.317824   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:05.317839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:05.400464   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:05.400500   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:05.442632   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:05.442657   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:07.998033   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:08.012363   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:08.012441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:08.048816   73230 cri.go:89] found id: ""
	I0906 20:08:08.048847   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.048876   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:08.048884   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:08.048947   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:08.109623   73230 cri.go:89] found id: ""
	I0906 20:08:08.109650   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.109661   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:08.109668   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:08.109730   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:08.145405   73230 cri.go:89] found id: ""
	I0906 20:08:08.145432   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.145443   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:08.145451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:08.145514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:08.187308   73230 cri.go:89] found id: ""
	I0906 20:08:08.187344   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.187355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:08.187362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:08.187422   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:08.228782   73230 cri.go:89] found id: ""
	I0906 20:08:08.228815   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.228826   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:08.228833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:08.228918   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:08.269237   73230 cri.go:89] found id: ""
	I0906 20:08:08.269266   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.269276   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:08.269285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:08.269351   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:08.305115   73230 cri.go:89] found id: ""
	I0906 20:08:08.305141   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.305149   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:08.305155   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:08.305206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:08.345442   73230 cri.go:89] found id: ""
	I0906 20:08:08.345472   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.345483   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:08.345494   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:08.345510   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:08.396477   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:08.396518   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:08.410978   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:08.411002   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:08.486220   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:08.486247   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:08.486265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:08.574138   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:08.574190   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:11.117545   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:11.131884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:11.131944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:11.169481   73230 cri.go:89] found id: ""
	I0906 20:08:11.169507   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.169518   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:11.169525   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:11.169590   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:11.211068   73230 cri.go:89] found id: ""
	I0906 20:08:11.211092   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.211100   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:11.211105   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:11.211157   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:11.250526   73230 cri.go:89] found id: ""
	I0906 20:08:11.250560   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.250574   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:11.250580   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:11.250627   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:11.289262   73230 cri.go:89] found id: ""
	I0906 20:08:11.289284   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.289292   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:11.289299   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:11.289346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:11.335427   73230 cri.go:89] found id: ""
	I0906 20:08:11.335456   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.335467   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:11.335475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:11.335535   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:11.375481   73230 cri.go:89] found id: ""
	I0906 20:08:11.375509   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.375518   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:11.375524   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:11.375575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:11.416722   73230 cri.go:89] found id: ""
	I0906 20:08:11.416748   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.416758   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:11.416765   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:11.416830   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:11.452986   73230 cri.go:89] found id: ""
	I0906 20:08:11.453019   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.453030   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:11.453042   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:11.453059   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:11.466435   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:11.466461   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:11.545185   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:11.545212   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:11.545231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:11.627390   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:11.627422   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:11.674071   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:11.674098   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.225887   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:14.242121   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:14.242200   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:14.283024   73230 cri.go:89] found id: ""
	I0906 20:08:14.283055   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.283067   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:14.283074   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:14.283135   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:14.325357   73230 cri.go:89] found id: ""
	I0906 20:08:14.325379   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.325387   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:14.325392   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:14.325455   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:14.362435   73230 cri.go:89] found id: ""
	I0906 20:08:14.362459   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.362467   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:14.362473   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:14.362537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:14.398409   73230 cri.go:89] found id: ""
	I0906 20:08:14.398441   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.398450   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:14.398455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:14.398509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:14.434902   73230 cri.go:89] found id: ""
	I0906 20:08:14.434934   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.434943   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:14.434950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:14.435009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:14.476605   73230 cri.go:89] found id: ""
	I0906 20:08:14.476635   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.476647   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:14.476655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:14.476717   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:14.533656   73230 cri.go:89] found id: ""
	I0906 20:08:14.533681   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.533690   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:14.533696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:14.533753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:14.599661   73230 cri.go:89] found id: ""
	I0906 20:08:14.599685   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.599693   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:14.599702   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:14.599715   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.657680   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:14.657712   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:14.671594   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:14.671624   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:14.747945   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:14.747969   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:14.747979   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:14.829021   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:14.829057   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:17.373569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:17.388910   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:17.388987   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:17.428299   73230 cri.go:89] found id: ""
	I0906 20:08:17.428335   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.428347   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:17.428354   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:17.428419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:17.464660   73230 cri.go:89] found id: ""
	I0906 20:08:17.464685   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.464692   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:17.464697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:17.464758   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:17.500018   73230 cri.go:89] found id: ""
	I0906 20:08:17.500047   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.500059   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:17.500067   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:17.500130   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:17.536345   73230 cri.go:89] found id: ""
	I0906 20:08:17.536375   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.536386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:17.536394   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:17.536456   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:17.574668   73230 cri.go:89] found id: ""
	I0906 20:08:17.574696   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.574707   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:17.574715   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:17.574780   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:17.611630   73230 cri.go:89] found id: ""
	I0906 20:08:17.611653   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.611663   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:17.611669   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:17.611713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:17.647610   73230 cri.go:89] found id: ""
	I0906 20:08:17.647639   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.647649   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:17.647657   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:17.647724   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:17.686204   73230 cri.go:89] found id: ""
	I0906 20:08:17.686233   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.686246   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:17.686260   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:17.686273   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:17.702040   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:17.702069   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:17.775033   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:17.775058   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:17.775074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:17.862319   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:17.862359   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:17.905567   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:17.905604   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:20.457191   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:20.471413   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:20.471474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:20.533714   73230 cri.go:89] found id: ""
	I0906 20:08:20.533749   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.533765   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:20.533772   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:20.533833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:20.580779   73230 cri.go:89] found id: ""
	I0906 20:08:20.580811   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.580823   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:20.580830   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:20.580902   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:20.619729   73230 cri.go:89] found id: ""
	I0906 20:08:20.619755   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.619763   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:20.619769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:20.619816   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:20.661573   73230 cri.go:89] found id: ""
	I0906 20:08:20.661599   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.661606   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:20.661612   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:20.661664   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:20.709409   73230 cri.go:89] found id: ""
	I0906 20:08:20.709443   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.709455   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:20.709463   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:20.709515   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:20.746743   73230 cri.go:89] found id: ""
	I0906 20:08:20.746783   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.746808   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:20.746816   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:20.746891   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:20.788129   73230 cri.go:89] found id: ""
	I0906 20:08:20.788155   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.788164   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:20.788170   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:20.788217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:20.825115   73230 cri.go:89] found id: ""
	I0906 20:08:20.825139   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.825147   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:20.825156   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:20.825167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:20.880975   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:20.881013   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:20.895027   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:20.895061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:20.972718   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:20.972739   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:20.972754   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:21.053062   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:21.053096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:23.595439   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:23.612354   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:23.612419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:23.654479   73230 cri.go:89] found id: ""
	I0906 20:08:23.654508   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.654519   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:23.654526   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:23.654591   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:23.690061   73230 cri.go:89] found id: ""
	I0906 20:08:23.690092   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.690103   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:23.690112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:23.690173   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:23.726644   73230 cri.go:89] found id: ""
	I0906 20:08:23.726670   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.726678   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:23.726684   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:23.726744   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:23.763348   73230 cri.go:89] found id: ""
	I0906 20:08:23.763378   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.763386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:23.763391   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:23.763452   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:23.799260   73230 cri.go:89] found id: ""
	I0906 20:08:23.799290   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.799299   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:23.799305   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:23.799359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:23.843438   73230 cri.go:89] found id: ""
	I0906 20:08:23.843470   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.843481   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:23.843489   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:23.843558   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:23.879818   73230 cri.go:89] found id: ""
	I0906 20:08:23.879847   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.879856   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:23.879867   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:23.879933   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:23.916182   73230 cri.go:89] found id: ""
	I0906 20:08:23.916207   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.916220   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:23.916229   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:23.916240   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:23.987003   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:23.987022   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:23.987033   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:24.073644   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:24.073684   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:24.118293   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:24.118328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:24.172541   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:24.172582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:26.687747   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:26.702174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:26.702238   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:26.740064   73230 cri.go:89] found id: ""
	I0906 20:08:26.740093   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.740101   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:26.740108   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:26.740158   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:26.775198   73230 cri.go:89] found id: ""
	I0906 20:08:26.775227   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.775237   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:26.775244   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:26.775303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:26.808850   73230 cri.go:89] found id: ""
	I0906 20:08:26.808892   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.808903   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:26.808915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:26.808974   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:26.842926   73230 cri.go:89] found id: ""
	I0906 20:08:26.842953   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.842964   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:26.842972   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:26.843031   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:26.878621   73230 cri.go:89] found id: ""
	I0906 20:08:26.878649   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.878658   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:26.878664   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:26.878713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:26.921816   73230 cri.go:89] found id: ""
	I0906 20:08:26.921862   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.921875   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:26.921884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:26.921952   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:26.960664   73230 cri.go:89] found id: ""
	I0906 20:08:26.960692   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.960702   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:26.960709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:26.960771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:27.004849   73230 cri.go:89] found id: ""
	I0906 20:08:27.004904   73230 logs.go:276] 0 containers: []
	W0906 20:08:27.004913   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:27.004922   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:27.004934   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:27.056237   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:27.056267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:27.071882   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:27.071904   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:27.143927   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:27.143949   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:27.143961   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:27.223901   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:27.223935   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:29.766615   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:29.780295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:29.780367   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:29.817745   73230 cri.go:89] found id: ""
	I0906 20:08:29.817775   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.817784   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:29.817790   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:29.817852   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:29.855536   73230 cri.go:89] found id: ""
	I0906 20:08:29.855559   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.855567   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:29.855572   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:29.855628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:29.895043   73230 cri.go:89] found id: ""
	I0906 20:08:29.895092   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.895104   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:29.895111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:29.895178   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:29.939225   73230 cri.go:89] found id: ""
	I0906 20:08:29.939248   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.939256   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:29.939262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:29.939331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:29.974166   73230 cri.go:89] found id: ""
	I0906 20:08:29.974190   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.974198   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:29.974203   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:29.974258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:30.009196   73230 cri.go:89] found id: ""
	I0906 20:08:30.009226   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.009237   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:30.009245   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:30.009310   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:30.043939   73230 cri.go:89] found id: ""
	I0906 20:08:30.043962   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.043970   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:30.043976   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:30.044023   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:30.080299   73230 cri.go:89] found id: ""
	I0906 20:08:30.080328   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.080336   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:30.080345   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:30.080356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:30.131034   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:30.131068   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:30.145502   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:30.145536   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:30.219941   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:30.219963   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:30.219978   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:30.307958   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:30.307995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:32.854002   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:32.867937   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:32.867998   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:32.906925   73230 cri.go:89] found id: ""
	I0906 20:08:32.906957   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.906969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:32.906976   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:32.907038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:32.946662   73230 cri.go:89] found id: ""
	I0906 20:08:32.946691   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.946702   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:32.946710   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:32.946771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:32.981908   73230 cri.go:89] found id: ""
	I0906 20:08:32.981936   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.981944   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:32.981950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:32.982001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:33.014902   73230 cri.go:89] found id: ""
	I0906 20:08:33.014930   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.014939   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:33.014945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:33.015055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:33.051265   73230 cri.go:89] found id: ""
	I0906 20:08:33.051290   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.051298   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:33.051310   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:33.051363   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:33.085436   73230 cri.go:89] found id: ""
	I0906 20:08:33.085468   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.085480   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:33.085487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:33.085552   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:33.121483   73230 cri.go:89] found id: ""
	I0906 20:08:33.121509   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.121517   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:33.121523   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:33.121578   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:33.159883   73230 cri.go:89] found id: ""
	I0906 20:08:33.159915   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.159926   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:33.159937   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:33.159953   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:33.174411   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:33.174442   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:33.243656   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:33.243694   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:33.243710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:33.321782   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:33.321823   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:33.363299   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:33.363335   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:35.916159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:35.929190   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:35.929265   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:35.962853   73230 cri.go:89] found id: ""
	I0906 20:08:35.962890   73230 logs.go:276] 0 containers: []
	W0906 20:08:35.962901   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:35.962909   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:35.962969   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:36.000265   73230 cri.go:89] found id: ""
	I0906 20:08:36.000309   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.000318   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:36.000324   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:36.000374   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:36.042751   73230 cri.go:89] found id: ""
	I0906 20:08:36.042781   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.042792   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:36.042800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:36.042859   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:36.077922   73230 cri.go:89] found id: ""
	I0906 20:08:36.077957   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.077967   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:36.077975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:36.078038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:36.114890   73230 cri.go:89] found id: ""
	I0906 20:08:36.114926   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.114937   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:36.114945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:36.114997   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:36.148058   73230 cri.go:89] found id: ""
	I0906 20:08:36.148089   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.148101   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:36.148108   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:36.148167   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:36.187334   73230 cri.go:89] found id: ""
	I0906 20:08:36.187361   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.187371   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:36.187379   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:36.187498   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:36.221295   73230 cri.go:89] found id: ""
	I0906 20:08:36.221331   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.221342   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:36.221353   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:36.221367   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:36.273489   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:36.273527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:36.287975   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:36.288005   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:36.366914   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:36.366937   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:36.366950   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:36.446582   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:36.446619   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.987075   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:39.001051   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:39.001113   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:39.038064   73230 cri.go:89] found id: ""
	I0906 20:08:39.038093   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.038103   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:39.038110   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:39.038175   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:39.075759   73230 cri.go:89] found id: ""
	I0906 20:08:39.075788   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.075799   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:39.075805   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:39.075866   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:39.113292   73230 cri.go:89] found id: ""
	I0906 20:08:39.113320   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.113331   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:39.113339   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:39.113404   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:39.157236   73230 cri.go:89] found id: ""
	I0906 20:08:39.157269   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.157281   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:39.157289   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:39.157362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:39.195683   73230 cri.go:89] found id: ""
	I0906 20:08:39.195704   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.195712   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:39.195717   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:39.195763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:39.234865   73230 cri.go:89] found id: ""
	I0906 20:08:39.234894   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.234903   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:39.234909   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:39.234961   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:39.269946   73230 cri.go:89] found id: ""
	I0906 20:08:39.269975   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.269983   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:39.269989   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:39.270034   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:39.306184   73230 cri.go:89] found id: ""
	I0906 20:08:39.306214   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.306225   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:39.306235   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:39.306249   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:39.357887   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:39.357920   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:39.371736   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:39.371767   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:39.445674   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:39.445695   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:39.445708   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:39.525283   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:39.525316   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:42.069066   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:42.083229   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:42.083313   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:42.124243   73230 cri.go:89] found id: ""
	I0906 20:08:42.124267   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.124275   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:42.124280   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:42.124330   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:42.162070   73230 cri.go:89] found id: ""
	I0906 20:08:42.162102   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.162113   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:42.162120   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:42.162183   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:42.199161   73230 cri.go:89] found id: ""
	I0906 20:08:42.199191   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.199201   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:42.199208   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:42.199266   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:42.236956   73230 cri.go:89] found id: ""
	I0906 20:08:42.236980   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.236991   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:42.236996   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:42.237068   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:42.272299   73230 cri.go:89] found id: ""
	I0906 20:08:42.272328   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.272336   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:42.272341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:42.272400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:42.310280   73230 cri.go:89] found id: ""
	I0906 20:08:42.310304   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.310312   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:42.310317   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:42.310362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:42.345850   73230 cri.go:89] found id: ""
	I0906 20:08:42.345873   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.345881   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:42.345887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:42.345937   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:42.380785   73230 cri.go:89] found id: ""
	I0906 20:08:42.380812   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.380820   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:42.380830   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:42.380843   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.435803   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:42.435839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:42.450469   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:42.450498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:42.521565   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:42.521587   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:42.521599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:42.595473   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:42.595508   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:45.136985   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:45.150468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:45.150540   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:45.186411   73230 cri.go:89] found id: ""
	I0906 20:08:45.186440   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.186448   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:45.186454   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:45.186521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:45.224463   73230 cri.go:89] found id: ""
	I0906 20:08:45.224495   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.224506   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:45.224513   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:45.224568   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:45.262259   73230 cri.go:89] found id: ""
	I0906 20:08:45.262286   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.262295   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:45.262301   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:45.262357   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:45.299463   73230 cri.go:89] found id: ""
	I0906 20:08:45.299492   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.299501   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:45.299507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:45.299561   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:45.336125   73230 cri.go:89] found id: ""
	I0906 20:08:45.336153   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.336162   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:45.336168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:45.336216   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:45.370397   73230 cri.go:89] found id: ""
	I0906 20:08:45.370427   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.370439   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:45.370448   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:45.370518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:45.406290   73230 cri.go:89] found id: ""
	I0906 20:08:45.406322   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.406333   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:45.406341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:45.406402   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:45.441560   73230 cri.go:89] found id: ""
	I0906 20:08:45.441592   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.441603   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:45.441614   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:45.441627   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:45.508769   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:45.508811   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:45.523659   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:45.523696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:45.595544   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:45.595567   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:45.595582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:45.676060   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:45.676096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:48.216490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:48.230021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:48.230093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:48.267400   73230 cri.go:89] found id: ""
	I0906 20:08:48.267433   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.267444   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:48.267451   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:48.267519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:48.314694   73230 cri.go:89] found id: ""
	I0906 20:08:48.314722   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.314731   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:48.314739   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:48.314805   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:48.358861   73230 cri.go:89] found id: ""
	I0906 20:08:48.358895   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.358906   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:48.358915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:48.358990   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:48.398374   73230 cri.go:89] found id: ""
	I0906 20:08:48.398400   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.398410   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:48.398416   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:48.398488   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:48.438009   73230 cri.go:89] found id: ""
	I0906 20:08:48.438039   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.438050   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:48.438058   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:48.438115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:48.475970   73230 cri.go:89] found id: ""
	I0906 20:08:48.475998   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.476007   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:48.476013   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:48.476071   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:48.512191   73230 cri.go:89] found id: ""
	I0906 20:08:48.512220   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.512230   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:48.512237   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:48.512299   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:48.547820   73230 cri.go:89] found id: ""
	I0906 20:08:48.547850   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.547861   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:48.547872   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:48.547886   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:48.616962   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:48.616997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:48.631969   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:48.631998   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:48.717025   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:48.717043   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:48.717054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:48.796131   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:48.796167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:51.342030   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:51.355761   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:51.355845   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:51.395241   73230 cri.go:89] found id: ""
	I0906 20:08:51.395272   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.395283   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:51.395290   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:51.395350   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:51.433860   73230 cri.go:89] found id: ""
	I0906 20:08:51.433888   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.433897   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:51.433904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:51.433968   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:51.475568   73230 cri.go:89] found id: ""
	I0906 20:08:51.475598   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.475608   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:51.475615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:51.475678   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:51.512305   73230 cri.go:89] found id: ""
	I0906 20:08:51.512329   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.512337   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:51.512342   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:51.512391   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:51.545796   73230 cri.go:89] found id: ""
	I0906 20:08:51.545819   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.545827   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:51.545833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:51.545884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:51.578506   73230 cri.go:89] found id: ""
	I0906 20:08:51.578531   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.578539   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:51.578545   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:51.578609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:51.616571   73230 cri.go:89] found id: ""
	I0906 20:08:51.616596   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.616609   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:51.616615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:51.616660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:51.651542   73230 cri.go:89] found id: ""
	I0906 20:08:51.651566   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.651580   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:51.651588   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:51.651599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:51.705160   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:51.705193   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:51.719450   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:51.719477   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:51.789775   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:51.789796   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:51.789809   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:51.870123   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:51.870158   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.411818   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:54.425759   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:54.425818   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:54.467920   73230 cri.go:89] found id: ""
	I0906 20:08:54.467943   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.467951   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:54.467956   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:54.468008   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:54.508324   73230 cri.go:89] found id: ""
	I0906 20:08:54.508349   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.508357   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:54.508363   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:54.508410   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:54.544753   73230 cri.go:89] found id: ""
	I0906 20:08:54.544780   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.544790   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:54.544797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:54.544884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:54.581407   73230 cri.go:89] found id: ""
	I0906 20:08:54.581436   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.581446   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:54.581453   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:54.581514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:54.618955   73230 cri.go:89] found id: ""
	I0906 20:08:54.618986   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.618998   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:54.619006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:54.619065   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:54.656197   73230 cri.go:89] found id: ""
	I0906 20:08:54.656229   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.656248   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:54.656255   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:54.656316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:54.697499   73230 cri.go:89] found id: ""
	I0906 20:08:54.697536   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.697544   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:54.697549   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:54.697600   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:54.734284   73230 cri.go:89] found id: ""
	I0906 20:08:54.734313   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.734331   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:54.734342   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:54.734356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:54.811079   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:54.811100   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:54.811111   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:54.887309   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:54.887346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.930465   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:54.930499   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:55.000240   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:55.000303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:57.530956   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:57.544056   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:57.544136   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:57.584492   73230 cri.go:89] found id: ""
	I0906 20:08:57.584519   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.584528   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:57.584534   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:57.584585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:57.620220   73230 cri.go:89] found id: ""
	I0906 20:08:57.620250   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.620259   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:57.620265   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:57.620321   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:57.655245   73230 cri.go:89] found id: ""
	I0906 20:08:57.655268   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.655283   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:57.655288   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:57.655346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:57.690439   73230 cri.go:89] found id: ""
	I0906 20:08:57.690470   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.690481   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:57.690487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:57.690551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:57.728179   73230 cri.go:89] found id: ""
	I0906 20:08:57.728206   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.728214   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:57.728221   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:57.728270   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:57.763723   73230 cri.go:89] found id: ""
	I0906 20:08:57.763752   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.763761   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:57.763767   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:57.763825   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:57.799836   73230 cri.go:89] found id: ""
	I0906 20:08:57.799861   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.799869   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:57.799876   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:57.799922   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:57.834618   73230 cri.go:89] found id: ""
	I0906 20:08:57.834644   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.834651   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:57.834660   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:57.834671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:57.887297   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:57.887331   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:57.901690   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:57.901717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:57.969179   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:57.969209   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:57.969223   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:58.052527   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:58.052642   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:09:00.593665   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:00.608325   73230 kubeadm.go:597] duration metric: took 4m4.153407014s to restartPrimaryControlPlane
	W0906 20:09:00.608399   73230 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:00.608428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:05.878028   73230 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.269561172s)
	I0906 20:09:05.878112   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:05.893351   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:05.904668   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:05.915560   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:05.915583   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:05.915633   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:09:05.926566   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:05.926625   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:05.937104   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:09:05.946406   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:05.946467   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:05.956203   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.965691   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:05.965751   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.976210   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:09:05.986104   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:05.986174   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:05.996282   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:06.068412   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:09:06.068507   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:06.213882   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:06.214044   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:06.214191   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:09:06.406793   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:06.408933   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:06.409043   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:06.409126   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:06.409242   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:06.409351   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:06.409445   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:06.409559   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:06.409666   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:06.409758   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:06.409870   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:06.409964   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:06.410010   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:06.410101   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:06.721268   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:06.888472   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.414908   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.505887   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.525704   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.525835   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.525913   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.699971   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:07.701970   73230 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.702095   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.708470   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.710216   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.711016   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.714706   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:09:47.714239   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:09:47.714464   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:47.714711   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:09:52.715187   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:52.715391   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:02.716155   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:02.716424   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:22.717567   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:22.717827   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:02.719781   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:02.720062   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:02.720077   73230 kubeadm.go:310] 
	I0906 20:11:02.720125   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:11:02.720177   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:11:02.720189   73230 kubeadm.go:310] 
	I0906 20:11:02.720246   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:11:02.720290   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:11:02.720443   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:11:02.720469   73230 kubeadm.go:310] 
	I0906 20:11:02.720593   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:11:02.720665   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:11:02.720722   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:11:02.720746   73230 kubeadm.go:310] 
	I0906 20:11:02.720900   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:11:02.721018   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:11:02.721028   73230 kubeadm.go:310] 
	I0906 20:11:02.721180   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:11:02.721311   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:11:02.721405   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:11:02.721500   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:11:02.721512   73230 kubeadm.go:310] 
	I0906 20:11:02.722088   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:11:02.722199   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:11:02.722310   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
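kubeadm's troubleshooting advice above can be followed directly on the node (for example after 'minikube ssh' into the affected profile); a sketch of those checks, using the CRI-O socket path shown in the log:

    # check whether the kubelet is running and inspect its recent logs
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # list control-plane containers known to CRI-O, then inspect a failing one by ID
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

CONTAINERID is a placeholder, exactly as in the kubeadm message; in this run the later crictl listings find no control-plane containers at all.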
	W0906 20:11:02.722419   73230 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0906 20:11:02.722469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:11:03.188091   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:11:03.204943   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:11:03.215434   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:11:03.215458   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:11:03.215506   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:11:03.225650   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:11:03.225713   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:11:03.236252   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:11:03.245425   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:11:03.245489   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:11:03.255564   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.264932   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:11:03.265014   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.274896   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:11:03.284027   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:11:03.284092   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:11:03.294368   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:11:03.377411   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:11:03.377509   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:11:03.537331   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:11:03.537590   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:11:03.537722   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:11:03.728458   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:11:03.730508   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:11:03.730621   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:11:03.730720   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:11:03.730869   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:11:03.730984   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:11:03.731082   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:11:03.731167   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:11:03.731258   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:11:03.731555   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:11:03.731896   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:11:03.732663   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:11:03.732953   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:11:03.733053   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:11:03.839927   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:11:03.988848   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:11:04.077497   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:11:04.213789   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:11:04.236317   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:11:04.237625   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:11:04.237719   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:11:04.399036   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:11:04.400624   73230 out.go:235]   - Booting up control plane ...
	I0906 20:11:04.400709   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:11:04.401417   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:11:04.402751   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:11:04.404122   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:11:04.407817   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:11:44.410273   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:11:44.410884   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:44.411132   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:49.411428   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:49.411674   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:59.412917   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:59.413182   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:19.414487   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:19.414692   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415457   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:59.415729   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415750   73230 kubeadm.go:310] 
	I0906 20:12:59.415808   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:12:59.415864   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:12:59.415874   73230 kubeadm.go:310] 
	I0906 20:12:59.415933   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:12:59.415979   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:12:59.416147   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:12:59.416167   73230 kubeadm.go:310] 
	I0906 20:12:59.416332   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:12:59.416372   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:12:59.416420   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:12:59.416428   73230 kubeadm.go:310] 
	I0906 20:12:59.416542   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:12:59.416650   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:12:59.416659   73230 kubeadm.go:310] 
	I0906 20:12:59.416818   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:12:59.416928   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:12:59.417030   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:12:59.417139   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:12:59.417153   73230 kubeadm.go:310] 
	I0906 20:12:59.417400   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:12:59.417485   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:12:59.417559   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0906 20:12:59.417626   73230 kubeadm.go:394] duration metric: took 8m3.018298427s to StartCluster
	I0906 20:12:59.417673   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:12:59.417741   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:12:59.464005   73230 cri.go:89] found id: ""
	I0906 20:12:59.464033   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.464040   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:12:59.464045   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:12:59.464101   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:12:59.504218   73230 cri.go:89] found id: ""
	I0906 20:12:59.504252   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.504264   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:12:59.504271   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:12:59.504327   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:12:59.541552   73230 cri.go:89] found id: ""
	I0906 20:12:59.541579   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.541589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:12:59.541596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:12:59.541663   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:12:59.580135   73230 cri.go:89] found id: ""
	I0906 20:12:59.580158   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.580168   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:12:59.580174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:12:59.580220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:12:59.622453   73230 cri.go:89] found id: ""
	I0906 20:12:59.622486   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.622498   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:12:59.622518   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:12:59.622587   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:12:59.661561   73230 cri.go:89] found id: ""
	I0906 20:12:59.661590   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.661601   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:12:59.661608   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:12:59.661668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:12:59.695703   73230 cri.go:89] found id: ""
	I0906 20:12:59.695732   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.695742   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:12:59.695749   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:12:59.695808   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:12:59.739701   73230 cri.go:89] found id: ""
	I0906 20:12:59.739733   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.739744   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:12:59.739756   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:12:59.739771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:12:59.791400   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:12:59.791428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:12:59.851142   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:12:59.851179   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:12:59.867242   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:12:59.867278   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:12:59.941041   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:12:59.941060   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:12:59.941071   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
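After the failed init attempts, minikube gathers diagnostics: it probes each control-plane component through crictl (all lookups above return empty), then collects kubelet, dmesg, describe-nodes, and CRI-O output. The equivalent commands, as run in the log, are roughly:

    # per-component container lookup (repeated for etcd, coredns, kube-scheduler, ...)
    sudo crictl ps -a --quiet --name=kube-apiserver
    # node-level logs gathered for the report
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

The describe-nodes call fails with 'The connection to the server localhost:8443 was refused', consistent with the apiserver never having started.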
	W0906 20:13:00.061377   73230 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 20:13:00.061456   73230 out.go:270] * 
	* 
	W0906 20:13:00.061515   73230 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.061532   73230 out.go:270] * 
	* 
	W0906 20:13:00.062343   73230 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
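As the box suggests, the full log bundle for this profile can be captured for an issue report with something like:

    minikube logs --file=logs.txt

(adding '-p <profile>' if the failing cluster is not the active profile; that flag is an assumption here and does not appear in the captured log).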
	I0906 20:13:00.065723   73230 out.go:201] 
	W0906 20:13:00.066968   73230 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.067028   73230 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 20:13:00.067059   73230 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 20:13:00.068497   73230 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-843298 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 2 (234.164985ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-843298 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-843298 logs -n 25: (1.545582236s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-603826 sudo cat                              | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo find                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo crio                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-603826                                       | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-859361 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | disable-driver-mounts-859361                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:57 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-504385             | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-458066            | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653828  | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC | 06 Sep 24 19:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC |                     |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-504385                  | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-458066                 | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-843298        | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653828       | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-843298             | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 20:00:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:00:55.455816   73230 out.go:345] Setting OutFile to fd 1 ...
	I0906 20:00:55.455933   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.455943   73230 out.go:358] Setting ErrFile to fd 2...
	I0906 20:00:55.455951   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.456141   73230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 20:00:55.456685   73230 out.go:352] Setting JSON to false
	I0906 20:00:55.457698   73230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6204,"bootTime":1725646651,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 20:00:55.457762   73230 start.go:139] virtualization: kvm guest
	I0906 20:00:55.459863   73230 out.go:177] * [old-k8s-version-843298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 20:00:55.461119   73230 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 20:00:55.461167   73230 notify.go:220] Checking for updates...
	I0906 20:00:55.463398   73230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:00:55.464573   73230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:00:55.465566   73230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:00:55.466605   73230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 20:00:55.467834   73230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:00:55.469512   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:00:55.470129   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.470183   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.484881   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I0906 20:00:55.485238   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.485752   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.485776   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.486108   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.486296   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.488175   73230 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 20:00:55.489359   73230 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 20:00:55.489671   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.489705   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.504589   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0906 20:00:55.505047   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.505557   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.505581   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.505867   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.506018   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.541116   73230 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 20:00:55.542402   73230 start.go:297] selected driver: kvm2
	I0906 20:00:55.542423   73230 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-8
43298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.542548   73230 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:00:55.543192   73230 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.543257   73230 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 20:00:55.558465   73230 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 20:00:55.558833   73230 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:00:55.558865   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:00:55.558875   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:00:55.558908   73230 start.go:340] cluster config:
	{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.559011   73230 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.561521   73230 out.go:177] * Starting "old-k8s-version-843298" primary control-plane node in "old-k8s-version-843298" cluster
	I0906 20:00:55.309027   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:58.377096   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:55.562714   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:00:55.562760   73230 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 20:00:55.562773   73230 cache.go:56] Caching tarball of preloaded images
	I0906 20:00:55.562856   73230 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 20:00:55.562868   73230 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0906 20:00:55.562977   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:00:55.563173   73230 start.go:360] acquireMachinesLock for old-k8s-version-843298: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:01:04.457122   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:07.529093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:13.609120   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:16.681107   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:22.761164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:25.833123   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:31.913167   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:34.985108   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:41.065140   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:44.137176   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:50.217162   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:53.289137   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:59.369093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:02.441171   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:08.521164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:11.593164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:17.673124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:20.745159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:26.825154   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:29.897211   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:35.977181   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:39.049161   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:45.129172   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:48.201208   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:54.281103   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:57.353175   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:03.433105   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:06.505124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:12.585121   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:15.657169   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:21.737151   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:24.809135   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:30.889180   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:33.961145   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:40.041159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:43.113084   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:46.117237   72441 start.go:364] duration metric: took 4m28.485189545s to acquireMachinesLock for "embed-certs-458066"
	I0906 20:03:46.117298   72441 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:03:46.117309   72441 fix.go:54] fixHost starting: 
	I0906 20:03:46.117737   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:03:46.117773   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:03:46.132573   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0906 20:03:46.133029   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:03:46.133712   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:03:46.133743   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:03:46.134097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:03:46.134322   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:03:46.134505   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:03:46.136291   72441 fix.go:112] recreateIfNeeded on embed-certs-458066: state=Stopped err=<nil>
	I0906 20:03:46.136313   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	W0906 20:03:46.136466   72441 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:03:46.138544   72441 out.go:177] * Restarting existing kvm2 VM for "embed-certs-458066" ...
	I0906 20:03:46.139833   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Start
	I0906 20:03:46.140001   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring networks are active...
	I0906 20:03:46.140754   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network default is active
	I0906 20:03:46.141087   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network mk-embed-certs-458066 is active
	I0906 20:03:46.141402   72441 main.go:141] libmachine: (embed-certs-458066) Getting domain xml...
	I0906 20:03:46.142202   72441 main.go:141] libmachine: (embed-certs-458066) Creating domain...
	I0906 20:03:47.351460   72441 main.go:141] libmachine: (embed-certs-458066) Waiting to get IP...
	I0906 20:03:47.352248   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.352628   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.352699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.352597   73827 retry.go:31] will retry after 202.870091ms: waiting for machine to come up
	I0906 20:03:46.114675   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:03:46.114711   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115092   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:03:46.115118   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115306   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:03:46.117092   72322 machine.go:96] duration metric: took 4m37.429712277s to provisionDockerMachine
	I0906 20:03:46.117135   72322 fix.go:56] duration metric: took 4m37.451419912s for fixHost
	I0906 20:03:46.117144   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 4m37.45145595s
	W0906 20:03:46.117167   72322 start.go:714] error starting host: provision: host is not running
	W0906 20:03:46.117242   72322 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0906 20:03:46.117252   72322 start.go:729] Will try again in 5 seconds ...
	I0906 20:03:47.557228   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.557656   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.557682   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.557606   73827 retry.go:31] will retry after 357.664781ms: waiting for machine to come up
	I0906 20:03:47.917575   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.918041   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.918068   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.918005   73827 retry.go:31] will retry after 338.480268ms: waiting for machine to come up
	I0906 20:03:48.258631   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.259269   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.259305   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.259229   73827 retry.go:31] will retry after 554.173344ms: waiting for machine to come up
	I0906 20:03:48.814947   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.815491   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.815523   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.815449   73827 retry.go:31] will retry after 601.029419ms: waiting for machine to come up
	I0906 20:03:49.418253   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:49.418596   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:49.418623   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:49.418548   73827 retry.go:31] will retry after 656.451458ms: waiting for machine to come up
	I0906 20:03:50.076488   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:50.076908   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:50.076928   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:50.076875   73827 retry.go:31] will retry after 1.13800205s: waiting for machine to come up
	I0906 20:03:51.216380   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:51.216801   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:51.216831   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:51.216758   73827 retry.go:31] will retry after 1.071685673s: waiting for machine to come up
	I0906 20:03:52.289760   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:52.290174   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:52.290202   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:52.290125   73827 retry.go:31] will retry after 1.581761127s: waiting for machine to come up
	I0906 20:03:51.119269   72322 start.go:360] acquireMachinesLock for no-preload-504385: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:03:53.873755   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:53.874150   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:53.874184   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:53.874120   73827 retry.go:31] will retry after 1.99280278s: waiting for machine to come up
	I0906 20:03:55.869267   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:55.869747   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:55.869776   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:55.869685   73827 retry.go:31] will retry after 2.721589526s: waiting for machine to come up
	I0906 20:03:58.594012   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:58.594402   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:58.594428   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:58.594354   73827 retry.go:31] will retry after 2.763858077s: waiting for machine to come up
	I0906 20:04:01.359424   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:01.359775   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:04:01.359809   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:04:01.359736   73827 retry.go:31] will retry after 3.822567166s: waiting for machine to come up
	I0906 20:04:06.669858   72867 start.go:364] duration metric: took 4m9.363403512s to acquireMachinesLock for "default-k8s-diff-port-653828"
	I0906 20:04:06.669929   72867 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:06.669938   72867 fix.go:54] fixHost starting: 
	I0906 20:04:06.670353   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:06.670393   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:06.688290   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44215
	I0906 20:04:06.688752   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:06.689291   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:04:06.689314   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:06.689692   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:06.689886   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:06.690048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:04:06.691557   72867 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653828: state=Stopped err=<nil>
	I0906 20:04:06.691592   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	W0906 20:04:06.691742   72867 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:06.693924   72867 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653828" ...
	I0906 20:04:06.694965   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Start
	I0906 20:04:06.695148   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring networks are active...
	I0906 20:04:06.695900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network default is active
	I0906 20:04:06.696316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network mk-default-k8s-diff-port-653828 is active
	I0906 20:04:06.696698   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Getting domain xml...
	I0906 20:04:06.697469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Creating domain...
	I0906 20:04:05.186782   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187288   72441 main.go:141] libmachine: (embed-certs-458066) Found IP for machine: 192.168.39.118
	I0906 20:04:05.187301   72441 main.go:141] libmachine: (embed-certs-458066) Reserving static IP address...
	I0906 20:04:05.187340   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has current primary IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187764   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.187784   72441 main.go:141] libmachine: (embed-certs-458066) Reserved static IP address: 192.168.39.118
	I0906 20:04:05.187797   72441 main.go:141] libmachine: (embed-certs-458066) DBG | skip adding static IP to network mk-embed-certs-458066 - found existing host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"}
	I0906 20:04:05.187805   72441 main.go:141] libmachine: (embed-certs-458066) Waiting for SSH to be available...
	I0906 20:04:05.187848   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Getting to WaitForSSH function...
	I0906 20:04:05.190229   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190546   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.190576   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190643   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH client type: external
	I0906 20:04:05.190679   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa (-rw-------)
	I0906 20:04:05.190714   72441 main.go:141] libmachine: (embed-certs-458066) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:05.190727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | About to run SSH command:
	I0906 20:04:05.190761   72441 main.go:141] libmachine: (embed-certs-458066) DBG | exit 0
	I0906 20:04:05.317160   72441 main.go:141] libmachine: (embed-certs-458066) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:05.317483   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetConfigRaw
	I0906 20:04:05.318089   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.320559   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.320944   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.320971   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.321225   72441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/config.json ...
	I0906 20:04:05.321445   72441 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:05.321465   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:05.321720   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.323699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.323972   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.324009   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.324126   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.324303   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324444   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324561   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.324706   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.324940   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.324953   72441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:05.437192   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:05.437217   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437479   72441 buildroot.go:166] provisioning hostname "embed-certs-458066"
	I0906 20:04:05.437495   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437665   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.440334   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440705   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.440733   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440925   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.441100   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441260   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441405   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.441573   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.441733   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.441753   72441 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-458066 && echo "embed-certs-458066" | sudo tee /etc/hostname
	I0906 20:04:05.566958   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-458066
	
	I0906 20:04:05.566986   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.569652   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.569984   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.570014   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.570158   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.570342   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570504   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570648   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.570838   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.571042   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.571060   72441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-458066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-458066/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-458066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:05.689822   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
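[Note] Hostname provisioning above runs two remote commands: set and persist the hostname, then make sure /etc/hosts has a 127.0.1.1 entry for it. A hedged Go sketch that assembles those same command strings (the function is illustrative; only the shell fragments are taken from the log):

    package main

    import "fmt"

    // buildHostnameCmds returns the two remote commands the provisioner runs:
    // set the hostname, persist it, and make sure /etc/hosts resolves it.
    func buildHostnameCmds(name string) []string {
    	hostsFix := `if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`
    	return []string{
    		fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name),
    		fmt.Sprintf(hostsFix, name),
    	}
    }

    func main() {
    	for _, c := range buildHostnameCmds("embed-certs-458066") {
    		fmt.Println(c)
    	}
    }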
	I0906 20:04:05.689855   72441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:05.689882   72441 buildroot.go:174] setting up certificates
	I0906 20:04:05.689891   72441 provision.go:84] configureAuth start
	I0906 20:04:05.689899   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.690182   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.692758   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693151   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.693172   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693308   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.695364   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.695754   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695909   72441 provision.go:143] copyHostCerts
	I0906 20:04:05.695957   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:05.695975   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:05.696042   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:05.696123   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:05.696130   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:05.696153   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:05.696248   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:05.696257   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:05.696280   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:05.696329   72441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-458066 san=[127.0.0.1 192.168.39.118 embed-certs-458066 localhost minikube]
	I0906 20:04:06.015593   72441 provision.go:177] copyRemoteCerts
	I0906 20:04:06.015656   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:06.015683   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.018244   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018598   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.018630   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018784   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.018990   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.019169   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.019278   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.110170   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 20:04:06.136341   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:06.161181   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:06.184758   72441 provision.go:87] duration metric: took 494.857261ms to configureAuth
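[Note] configureAuth above re-issues a server certificate whose SANs cover the VM's addresses and hostnames (127.0.0.1, 192.168.39.118, embed-certs-458066, localhost, minikube). A minimal crypto/x509 sketch of signing such a certificate against an existing CA; the SANs and file names come from the log, the parsing assumes PKCS#1 RSA keys and is otherwise illustrative:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func must[T any](v T, err error) T {
    	if err != nil {
    		log.Fatal(err)
    	}
    	return v
    }

    func main() {
    	// Load the existing CA (paths shortened; key parsing assumes PKCS#1 RSA).
    	caBlock, _ := pem.Decode(must(os.ReadFile("certs/ca.pem")))
    	caCert := must(x509.ParseCertificate(caBlock.Bytes))
    	keyBlock, _ := pem.Decode(must(os.ReadFile("certs/ca-key.pem")))
    	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

    	// Fresh key pair for the server certificate.
    	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-458066"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as listed in the provision.go:117 line above.
    		DNSNames:    []string{"embed-certs-458066", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.118")},
    	}
    	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey))
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }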
	I0906 20:04:06.184786   72441 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:06.184986   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:06.185049   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.187564   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.187955   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.187978   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.188153   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.188399   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188571   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.188920   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.189070   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.189084   72441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:06.425480   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:06.425518   72441 machine.go:96] duration metric: took 1.104058415s to provisionDockerMachine
	I0906 20:04:06.425535   72441 start.go:293] postStartSetup for "embed-certs-458066" (driver="kvm2")
	I0906 20:04:06.425548   72441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:06.425572   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.425893   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:06.425919   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.428471   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428768   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.428794   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428928   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.429109   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.429283   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.429419   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.515180   72441 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:06.519357   72441 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:06.519390   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:06.519464   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:06.519540   72441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:06.519625   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:06.528542   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:06.552463   72441 start.go:296] duration metric: took 126.912829ms for postStartSetup
	I0906 20:04:06.552514   72441 fix.go:56] duration metric: took 20.435203853s for fixHost
	I0906 20:04:06.552540   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.554994   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555521   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.555556   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555739   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.555937   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556095   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556253   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.556409   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.556600   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.556613   72441 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:06.669696   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653046.632932221
	
	I0906 20:04:06.669720   72441 fix.go:216] guest clock: 1725653046.632932221
	I0906 20:04:06.669730   72441 fix.go:229] Guest: 2024-09-06 20:04:06.632932221 +0000 UTC Remote: 2024-09-06 20:04:06.552518521 +0000 UTC m=+289.061134864 (delta=80.4137ms)
	I0906 20:04:06.669761   72441 fix.go:200] guest clock delta is within tolerance: 80.4137ms
	I0906 20:04:06.669769   72441 start.go:83] releasing machines lock for "embed-certs-458066", held for 20.552490687s
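[Note] The guest-clock check above runs "date +%s.%N" on the VM and compares it with the host clock, only resyncing if the delta is out of tolerance. A small Go sketch of that comparison; the ssh invocation and the 2s tolerance are assumptions for illustration, not minikube's exact values:

    package main

    import (
    	"fmt"
    	"math"
    	"os/exec"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Ask the guest for its clock as seconds.nanoseconds, as in the log.
    	out, err := exec.Command("ssh", "docker@192.168.39.118", "date +%s.%N").Output()
    	if err != nil {
    		panic(err)
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)

    	// Assumed tolerance; the real check lives in minikube's fix.go.
    	const tolerance = 2 * time.Second
    	if math.Abs(delta.Seconds()) > tolerance.Seconds() {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	} else {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	}
    }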
	I0906 20:04:06.669801   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.670060   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:06.673015   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673405   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.673433   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673599   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674041   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674210   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674304   72441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:06.674351   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.674414   72441 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:06.674437   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.676916   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677063   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677314   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677341   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677481   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677513   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677686   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677691   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677864   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677878   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678013   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678025   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.678191   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.758176   72441 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:06.782266   72441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:06.935469   72441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:06.941620   72441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:06.941680   72441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:06.957898   72441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:06.957927   72441 start.go:495] detecting cgroup driver to use...
	I0906 20:04:06.957995   72441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:06.978574   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:06.993967   72441 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:06.994035   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:07.008012   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:07.022073   72441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:07.133622   72441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:07.291402   72441 docker.go:233] disabling docker service ...
	I0906 20:04:07.291478   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:07.306422   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:07.321408   72441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:07.442256   72441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:07.564181   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:07.579777   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:07.599294   72441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:07.599361   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.610457   72441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:07.610555   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.621968   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.633527   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.645048   72441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:07.659044   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.670526   72441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.689465   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.701603   72441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:07.712085   72441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:07.712144   72441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:07.728406   72441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:07.739888   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:07.862385   72441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:07.954721   72441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:07.954792   72441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:07.959478   72441 start.go:563] Will wait 60s for crictl version
	I0906 20:04:07.959545   72441 ssh_runner.go:195] Run: which crictl
	I0906 20:04:07.963893   72441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:08.003841   72441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:08.003917   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.032191   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.063563   72441 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
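[Note] Before kubelet is started, the tool rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) and restarts crio, as shown in the sed/systemctl lines above. A condensed Go sketch that replays the same remote edits; runRemote is a hypothetical stand-in for minikube's ssh_runner, while the shell commands themselves are the ones from the log:

    package main

    import (
    	"log"
    	"os/exec"
    )

    // runRemote is a stand-in for minikube's ssh_runner: it executes a shell
    // command on the guest over ssh and fails loudly on error.
    func runRemote(cmd string) {
    	out, err := exec.Command("ssh",
    		"-i", ".minikube/machines/embed-certs-458066/id_rsa",
    		"docker@192.168.39.118", cmd).CombinedOutput()
    	if err != nil {
    		log.Fatalf("%s: %v\n%s", cmd, err, out)
    	}
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	for _, cmd := range []string{
    		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' ` + conf,
    		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
    		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
    		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	} {
    		runRemote(cmd)
    	}
    }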
	I0906 20:04:07.961590   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting to get IP...
	I0906 20:04:07.962441   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962859   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962923   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:07.962841   73982 retry.go:31] will retry after 292.508672ms: waiting for machine to come up
	I0906 20:04:08.257346   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257845   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257867   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.257815   73982 retry.go:31] will retry after 265.967606ms: waiting for machine to come up
	I0906 20:04:08.525352   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525907   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.525834   73982 retry.go:31] will retry after 308.991542ms: waiting for machine to come up
	I0906 20:04:08.836444   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837021   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837053   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.836973   73982 retry.go:31] will retry after 483.982276ms: waiting for machine to come up
	I0906 20:04:09.322661   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323161   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323184   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.323125   73982 retry.go:31] will retry after 574.860867ms: waiting for machine to come up
	I0906 20:04:09.899849   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900228   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.900187   73982 retry.go:31] will retry after 769.142372ms: waiting for machine to come up
	I0906 20:04:10.671316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671796   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671853   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:10.671771   73982 retry.go:31] will retry after 720.232224ms: waiting for machine to come up
	I0906 20:04:11.393120   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393502   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393534   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:11.393447   73982 retry.go:31] will retry after 975.812471ms: waiting for machine to come up
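[Note] In parallel, the default-k8s-diff-port-653828 profile is still polling libvirt for a DHCP lease, retrying with growing, jittered delays (the retry.go:31 lines above). A hedged sketch of that kind of backoff loop; lookupIP is a placeholder for the real libvirt lease query and the delay constants are illustrative:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a placeholder for the real libvirt DHCP-lease lookup; it fails
    // until the machine has an address.
    func lookupIP() (string, error) { return "", errors.New("no lease yet") }

    // waitForIP retries lookupIP with an exponential, jittered backoff, much like
    // the "will retry after ..." lines in the log.
    func waitForIP(maxWait time.Duration) (string, error) {
    	deadline := time.Now().Add(maxWait)
    	delay := 250 * time.Millisecond
    	for attempt := 1; time.Now().Before(deadline); attempt++ {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, sleep)
    		time.Sleep(sleep)
    		if delay < 4*time.Second {
    			delay = delay * 3 / 2
    		}
    	}
    	return "", fmt.Errorf("machine did not get an IP within %v", maxWait)
    }

    func main() {
    	_, err := waitForIP(5 * time.Second)
    	fmt.Println(err)
    }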
	I0906 20:04:08.064907   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:08.067962   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068410   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:08.068442   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068626   72441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:08.072891   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:08.086275   72441 kubeadm.go:883] updating cluster {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:08.086383   72441 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:08.086423   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:08.123100   72441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:08.123158   72441 ssh_runner.go:195] Run: which lz4
	I0906 20:04:08.127330   72441 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:08.131431   72441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:08.131466   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:09.584066   72441 crio.go:462] duration metric: took 1.456765631s to copy over tarball
	I0906 20:04:09.584131   72441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:11.751911   72441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.167751997s)
	I0906 20:04:11.751949   72441 crio.go:469] duration metric: took 2.167848466s to extract the tarball
	I0906 20:04:11.751959   72441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:11.790385   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:11.831973   72441 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:11.831995   72441 cache_images.go:84] Images are preloaded, skipping loading
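[Note] The preload step above only uploads the ~389 MB image tarball when crictl reports the images missing: it stats /preloaded.tar.lz4 on the guest, copies the archive if absent, unpacks it into /var, then removes it. A short Go sketch of that check-then-extract sequence; run/ssh are hypothetical helpers standing in for ssh_runner, while the stat and tar flags are the ones from the log:

    package main

    import (
    	"log"
    	"os/exec"
    )

    // run executes a local command; it stands in for minikube's ssh_runner.
    func run(name string, args ...string) error {
    	return exec.Command(name, args...).Run()
    }

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	ssh := func(cmd string) error { return run("ssh", "docker@192.168.39.118", cmd) }

    	// Only upload if the guest does not already have the archive.
    	if ssh(`stat -c "%s %y" `+tarball) != nil {
    		src := ".minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4"
    		if err := run("scp", src, "docker@192.168.39.118:"+tarball); err != nil {
    			log.Fatal(err)
    		}
    	}
    	// Extract with the same flags as the log, then clean up.
    	if err := ssh("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball); err != nil {
    		log.Fatal(err)
    	}
    	_ = ssh("sudo rm -f " + tarball)
    }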
	I0906 20:04:11.832003   72441 kubeadm.go:934] updating node { 192.168.39.118 8443 v1.31.0 crio true true} ...
	I0906 20:04:11.832107   72441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-458066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:11.832166   72441 ssh_runner.go:195] Run: crio config
	I0906 20:04:11.881946   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:11.881973   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:11.882000   72441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:11.882028   72441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-458066 NodeName:embed-certs-458066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:11.882186   72441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-458066"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:11.882266   72441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:11.892537   72441 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:11.892617   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:11.902278   72441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0906 20:04:11.920451   72441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:11.938153   72441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
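[Note] The kubelet drop-in, the unit file and the rendered kubeadm.yaml are written with "scp memory --> path": the bytes never touch the local disk, they are streamed straight into a root-owned remote file. A minimal sketch of that pattern using ssh plus sudo tee; writeRemote and the sample drop-in content are illustrative, not minikube's implementation:

    package main

    import (
    	"bytes"
    	"log"
    	"os/exec"
    )

    // writeRemote streams an in-memory buffer to a root-owned file on the guest,
    // the same idea as the "scp memory --> /var/tmp/minikube/kubeadm.yaml.new"
    // lines in the log.
    func writeRemote(path string, data []byte) error {
    	cmd := exec.Command("ssh", "docker@192.168.39.118",
    		"sudo tee "+path+" >/dev/null")
    	cmd.Stdin = bytes.NewReader(data)
    	return cmd.Run()
    }

    func main() {
    	// Sample content only; the real drop-in carries the kubelet flags shown above.
    	dropIn := []byte("[Service]\nEnvironment=\"KUBELET_EXTRA_ARGS=--node-ip=192.168.39.118\"\n")
    	if err := writeRemote("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", dropIn); err != nil {
    		log.Fatal(err)
    	}
    }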
	I0906 20:04:11.957510   72441 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:11.961364   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:11.973944   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:12.109677   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:12.126348   72441 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066 for IP: 192.168.39.118
	I0906 20:04:12.126378   72441 certs.go:194] generating shared ca certs ...
	I0906 20:04:12.126399   72441 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:12.126562   72441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:12.126628   72441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:12.126642   72441 certs.go:256] generating profile certs ...
	I0906 20:04:12.126751   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/client.key
	I0906 20:04:12.126843   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key.c10a03b1
	I0906 20:04:12.126904   72441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key
	I0906 20:04:12.127063   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:12.127111   72441 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:12.127123   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:12.127153   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:12.127189   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:12.127218   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:12.127268   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:12.128117   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:12.185978   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:12.218124   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:12.254546   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:12.290098   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0906 20:04:12.317923   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:12.341186   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:12.363961   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:04:12.388000   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:12.418618   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:12.442213   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:12.465894   72441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:12.482404   72441 ssh_runner.go:195] Run: openssl version
	I0906 20:04:12.488370   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:12.499952   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504565   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504619   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.510625   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:12.522202   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:12.370306   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370743   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370779   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:12.370688   73982 retry.go:31] will retry after 1.559820467s: waiting for machine to come up
	I0906 20:04:13.932455   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933042   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933072   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:13.932985   73982 retry.go:31] will retry after 1.968766852s: waiting for machine to come up
	I0906 20:04:15.903304   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903826   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903855   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:15.903775   73982 retry.go:31] will retry after 2.738478611s: waiting for machine to come up
	I0906 20:04:12.533501   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538229   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538284   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.544065   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:12.555220   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:12.566402   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571038   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571093   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.577057   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
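[Note] Each CA file copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash, which is where the b5213941.0, 51391683.0 and 3ec20f2e.0 names above come from. A sketch of that install step, shelling out to openssl for the hash; the paths are the ones from the log, the helper is illustrative:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    // installCA links a PEM under /etc/ssl/certs/<subject-hash>.0 so OpenSSL and
    // friends can find it, mirroring the openssl / ln -fs pairs in the log.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
    	for _, p := range []string{
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/13178.pem",
    		"/usr/share/ca-certificates/131782.pem",
    	} {
    		if err := installCA(p); err != nil {
    			log.Fatal(err)
    		}
    	}
    }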
	I0906 20:04:12.588056   72441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:12.592538   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:12.598591   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:12.604398   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:12.610502   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:12.616513   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:12.622859   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
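[Note] The "-checkend 86400" calls above verify each control-plane certificate is still valid for at least another 24 hours before it is reused. The equivalent check in pure Go (path taken from one of the log lines; everything else is a sketch):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
    	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(raw)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h, would regenerate")
    	} else {
    		fmt.Println("certificate is valid for at least another 24h")
    	}
    }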
	I0906 20:04:12.628975   72441 kubeadm.go:392] StartCluster: {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:12.629103   72441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:12.629154   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.667699   72441 cri.go:89] found id: ""
	I0906 20:04:12.667764   72441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:12.678070   72441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:12.678092   72441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:12.678148   72441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:12.687906   72441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:12.688889   72441 kubeconfig.go:125] found "embed-certs-458066" server: "https://192.168.39.118:8443"
	I0906 20:04:12.690658   72441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:12.700591   72441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.118
	I0906 20:04:12.700623   72441 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:12.700635   72441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:12.700675   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.741471   72441 cri.go:89] found id: ""
	I0906 20:04:12.741553   72441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:12.757877   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:12.767729   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:12.767748   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:12.767800   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:12.777094   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:12.777157   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:12.786356   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:12.795414   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:12.795470   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:12.804727   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.813481   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:12.813534   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.822844   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:12.831877   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:12.831930   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:12.841082   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:12.850560   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:12.975888   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:13.850754   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.064392   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.140680   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.239317   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:14.239411   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:14.740313   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.240388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.740388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.755429   72441 api_server.go:72] duration metric: took 1.516111342s to wait for apiserver process to appear ...
	I0906 20:04:15.755462   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:15.755483   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.544772   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.544807   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.544824   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.596487   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.596546   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.755752   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.761917   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:18.761946   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.256512   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.265937   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.265973   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.756568   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.763581   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.763606   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:20.256237   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:20.262036   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:04:20.268339   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:20.268364   72441 api_server.go:131] duration metric: took 4.512894792s to wait for apiserver health ...
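The healthz wait above polls https://192.168.39.118:8443/healthz roughly every 500ms. The first responses are 403 because the anonymous probe is rejected before the RBAC bootstrap roles are in place, then 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and the wait ends once the endpoint returns 200 "ok". A simplified Go sketch of such a polling loop is shown below; it is illustrative, not minikube's implementation, and it skips TLS verification only because no CA bundle is wired in.

// waitForHealthz polls a /healthz URL until it returns 200 or the timeout expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports "ok"
			}
			// 403: anonymous request rejected before RBAC bootstrap roles exist.
			// 500: one or more post-start hooks still failing.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.118:8443/healthz", 4*time.Minute))
}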
	I0906 20:04:20.268372   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:20.268378   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:20.270262   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:18.644597   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645056   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645088   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:18.644992   73982 retry.go:31] will retry after 2.982517528s: waiting for machine to come up
	I0906 20:04:21.631028   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631392   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631414   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:21.631367   73982 retry.go:31] will retry after 3.639469531s: waiting for machine to come up
	I0906 20:04:20.271474   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:20.282996   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:04:20.303957   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:20.315560   72441 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:20.315602   72441 system_pods.go:61] "coredns-6f6b679f8f-v6z7z" [b2c18dba-1210-4e95-a705-95abceca92f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:20.315611   72441 system_pods.go:61] "etcd-embed-certs-458066" [cf60e7c7-1801-42c7-be25-85242c22a5d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:20.315619   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [48c684ec-f93f-49ec-868b-6e7bc20ad506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:20.315625   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [1d55b520-2d8f-4517-a491-8193eaff5d89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:20.315631   72441 system_pods.go:61] "kube-proxy-crvq7" [f0610684-81ee-426a-adc2-aea80faab822] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:20.315639   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [d8744325-58f2-43a8-9a93-516b5a6fb989] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:20.315644   72441 system_pods.go:61] "metrics-server-6867b74b74-gtg94" [600e9c90-20db-407e-b586-fae3809d87b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:20.315649   72441 system_pods.go:61] "storage-provisioner" [1efe7188-2d33-4a29-afbe-823adbef73b3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:20.315657   72441 system_pods.go:74] duration metric: took 11.674655ms to wait for pod list to return data ...
	I0906 20:04:20.315665   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:20.318987   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:20.319012   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:20.319023   72441 node_conditions.go:105] duration metric: took 3.354197ms to run NodePressure ...
	I0906 20:04:20.319038   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:20.600925   72441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607562   72441 kubeadm.go:739] kubelet initialised
	I0906 20:04:20.607590   72441 kubeadm.go:740] duration metric: took 6.637719ms waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607602   72441 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:20.611592   72441 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:26.558023   73230 start.go:364] duration metric: took 3m30.994815351s to acquireMachinesLock for "old-k8s-version-843298"
	I0906 20:04:26.558087   73230 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:26.558096   73230 fix.go:54] fixHost starting: 
	I0906 20:04:26.558491   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:26.558542   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:26.576511   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0906 20:04:26.576933   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:26.577434   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:04:26.577460   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:26.577794   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:26.577968   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:26.578128   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetState
	I0906 20:04:26.579640   73230 fix.go:112] recreateIfNeeded on old-k8s-version-843298: state=Stopped err=<nil>
	I0906 20:04:26.579674   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	W0906 20:04:26.579829   73230 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:26.581843   73230 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-843298" ...
	I0906 20:04:25.275406   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275902   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Found IP for machine: 192.168.50.16
	I0906 20:04:25.275942   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has current primary IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275955   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserving static IP address...
	I0906 20:04:25.276431   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.276463   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserved static IP address: 192.168.50.16
	I0906 20:04:25.276482   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | skip adding static IP to network mk-default-k8s-diff-port-653828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"}
	I0906 20:04:25.276493   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for SSH to be available...
	I0906 20:04:25.276512   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Getting to WaitForSSH function...
	I0906 20:04:25.278727   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279006   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.279037   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279196   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH client type: external
	I0906 20:04:25.279234   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa (-rw-------)
	I0906 20:04:25.279289   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:25.279312   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | About to run SSH command:
	I0906 20:04:25.279330   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | exit 0
	I0906 20:04:25.405134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:25.405524   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetConfigRaw
	I0906 20:04:25.406134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.408667   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409044   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.409074   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409332   72867 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/config.json ...
	I0906 20:04:25.409513   72867 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:25.409530   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:25.409724   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.411737   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412027   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.412060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412171   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.412362   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412489   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412662   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.412802   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.413045   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.413059   72867 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:25.513313   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:25.513343   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513613   72867 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653828"
	I0906 20:04:25.513644   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513851   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.516515   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.516847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.516895   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.517116   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.517300   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517461   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517574   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.517712   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.517891   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.517905   72867 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653828 && echo "default-k8s-diff-port-653828" | sudo tee /etc/hostname
	I0906 20:04:25.637660   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653828
	
	I0906 20:04:25.637691   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.640258   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640600   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.640626   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640811   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.641001   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641177   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641333   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.641524   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.641732   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.641754   72867 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:25.749746   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:25.749773   72867 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:25.749795   72867 buildroot.go:174] setting up certificates
	I0906 20:04:25.749812   72867 provision.go:84] configureAuth start
	I0906 20:04:25.749828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.750111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.752528   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.752893   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.752920   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.753104   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.755350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755642   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.755666   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755808   72867 provision.go:143] copyHostCerts
	I0906 20:04:25.755858   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:25.755875   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:25.755930   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:25.756017   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:25.756024   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:25.756046   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:25.756129   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:25.756137   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:25.756155   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:25.756212   72867 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653828 san=[127.0.0.1 192.168.50.16 default-k8s-diff-port-653828 localhost minikube]
	I0906 20:04:25.934931   72867 provision.go:177] copyRemoteCerts
	I0906 20:04:25.935018   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:25.935060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.937539   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.937899   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.937925   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.938111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.938308   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.938469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.938644   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.019666   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:26.043989   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0906 20:04:26.066845   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 20:04:26.090526   72867 provision.go:87] duration metric: took 340.698646ms to configureAuth
	I0906 20:04:26.090561   72867 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:26.090786   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:26.090878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.093783   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094167   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.094201   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094503   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.094689   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094850   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094975   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.095130   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.095357   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.095389   72867 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:26.324270   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:26.324301   72867 machine.go:96] duration metric: took 914.775498ms to provisionDockerMachine
	I0906 20:04:26.324315   72867 start.go:293] postStartSetup for "default-k8s-diff-port-653828" (driver="kvm2")
	I0906 20:04:26.324328   72867 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:26.324350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.324726   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:26.324759   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.327339   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327718   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.327750   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.328147   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.328309   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.328449   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.408475   72867 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:26.413005   72867 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:26.413033   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:26.413107   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:26.413203   72867 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:26.413320   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:26.422811   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:26.449737   72867 start.go:296] duration metric: took 125.408167ms for postStartSetup
	I0906 20:04:26.449772   72867 fix.go:56] duration metric: took 19.779834553s for fixHost
	I0906 20:04:26.449792   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.452589   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.452990   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.453022   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.453323   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.453529   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453710   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.453966   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.454125   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.454136   72867 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:26.557844   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653066.531604649
	
	I0906 20:04:26.557875   72867 fix.go:216] guest clock: 1725653066.531604649
	I0906 20:04:26.557884   72867 fix.go:229] Guest: 2024-09-06 20:04:26.531604649 +0000 UTC Remote: 2024-09-06 20:04:26.449775454 +0000 UTC m=+269.281822801 (delta=81.829195ms)
	I0906 20:04:26.557904   72867 fix.go:200] guest clock delta is within tolerance: 81.829195ms
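The guest clock check above runs "date +%s.%N" inside the VM and compares the result with the host clock; here the delta is about 82ms, which is within tolerance, so no clock resync is needed. The Go sketch below parses that "seconds.nanoseconds" output and computes the delta; it is illustrative only, and the helper name is an assumption.

// clockDelta parses the guest's "date +%s.%N" output and returns host-minus-guest time.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or trim the fractional part to exactly nanosecond precision.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return local.Sub(guest), nil
}

func main() {
	d, _ := clockDelta("1725653066.531604649", time.Now())
	fmt.Println("guest clock delta:", d)
}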
	I0906 20:04:26.557909   72867 start.go:83] releasing machines lock for "default-k8s-diff-port-653828", held for 19.888002519s
	I0906 20:04:26.557943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.558256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:26.561285   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561705   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.561732   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562425   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562628   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562732   72867 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:26.562782   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.562920   72867 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:26.562950   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.565587   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.565970   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566149   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566331   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.566542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.566605   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566633   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566744   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.566756   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566992   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.567145   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.567302   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.672529   72867 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:26.678762   72867 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:26.825625   72867 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:26.832290   72867 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:26.832363   72867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:26.848802   72867 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:26.848824   72867 start.go:495] detecting cgroup driver to use...
	I0906 20:04:26.848917   72867 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:26.864986   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:26.878760   72867 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:26.878813   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:26.893329   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:26.909090   72867 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:27.025534   72867 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:27.190190   72867 docker.go:233] disabling docker service ...
	I0906 20:04:27.190293   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:22.617468   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:24.618561   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.118448   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.204700   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:27.217880   72867 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:27.346599   72867 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:27.466601   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:27.480785   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:27.501461   72867 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:27.501523   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.511815   72867 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:27.511868   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.521806   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.532236   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.542227   72867 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:27.552389   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.563462   72867 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.583365   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.594465   72867 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:27.605074   72867 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:27.605140   72867 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:27.618702   72867 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:27.630566   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:27.748387   72867 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:27.841568   72867 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:27.841652   72867 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:27.846880   72867 start.go:563] Will wait 60s for crictl version
	I0906 20:04:27.846936   72867 ssh_runner.go:195] Run: which crictl
	I0906 20:04:27.851177   72867 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:27.895225   72867 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:27.895327   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.934388   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.966933   72867 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
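[Editor's note] The two "Will wait 60s for ..." entries above show how the restart declares the container runtime ready: first poll for /var/run/crio/crio.sock to exist, then probe crictl version. Below is a minimal Go sketch of that kind of deadline-bounded socket poll; the helper name waitForSocket and the hard-coded path are assumptions for illustration only, not minikube's actual API (minikube runs the equivalent stat over SSH).

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a socket path until it exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // socket is present; the runtime can be probed next
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is up")
    }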
	I0906 20:04:26.583194   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .Start
	I0906 20:04:26.583341   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring networks are active...
	I0906 20:04:26.584046   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network default is active
	I0906 20:04:26.584420   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network mk-old-k8s-version-843298 is active
	I0906 20:04:26.584851   73230 main.go:141] libmachine: (old-k8s-version-843298) Getting domain xml...
	I0906 20:04:26.585528   73230 main.go:141] libmachine: (old-k8s-version-843298) Creating domain...
	I0906 20:04:27.874281   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting to get IP...
	I0906 20:04:27.875189   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:27.875762   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:27.875844   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:27.875754   74166 retry.go:31] will retry after 289.364241ms: waiting for machine to come up
	I0906 20:04:28.166932   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.167349   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.167375   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.167303   74166 retry.go:31] will retry after 317.106382ms: waiting for machine to come up
	I0906 20:04:28.485664   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.486147   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.486241   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.486199   74166 retry.go:31] will retry after 401.712201ms: waiting for machine to come up
	I0906 20:04:28.890039   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.890594   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.890621   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.890540   74166 retry.go:31] will retry after 570.418407ms: waiting for machine to come up
	I0906 20:04:29.462983   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:29.463463   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:29.463489   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:29.463428   74166 retry.go:31] will retry after 696.361729ms: waiting for machine to come up
	I0906 20:04:30.161305   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:30.161829   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:30.161876   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:30.161793   74166 retry.go:31] will retry after 896.800385ms: waiting for machine to come up
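[Editor's note] The retry.go entries above show the kvm2 driver re-querying the DHCP leases with a growing, jittered delay (289ms, 317ms, 401ms, ...) until the old-k8s-version-843298 domain reports an IP. A hedged Go sketch of that retry pattern follows; getIP and its canned result are stand-ins invented for the illustration, not the driver's real lease lookup.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // getIP stands in for a DHCP-lease lookup; here it simply fails a few times.
    func getIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errNoIP
        }
        return "192.168.61.10", nil // placeholder address for the sketch
    }

    func main() {
        delay := 250 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := getIP(attempt)
            if err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            // grow the delay and add jitter, similar to the intervals in the log
            wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
    }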
	I0906 20:04:27.968123   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:27.971448   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.971880   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:27.971904   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.972128   72867 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:27.981160   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:27.994443   72867 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:27.994575   72867 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:27.994635   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:28.043203   72867 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:28.043285   72867 ssh_runner.go:195] Run: which lz4
	I0906 20:04:28.048798   72867 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:28.053544   72867 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:28.053577   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:29.490070   72867 crio.go:462] duration metric: took 1.441303819s to copy over tarball
	I0906 20:04:29.490142   72867 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:31.649831   72867 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159650072s)
	I0906 20:04:31.649870   72867 crio.go:469] duration metric: took 2.159772826s to extract the tarball
	I0906 20:04:31.649880   72867 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:31.686875   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:31.729557   72867 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:31.729580   72867 cache_images.go:84] Images are preloaded, skipping loading
	I0906 20:04:31.729587   72867 kubeadm.go:934] updating node { 192.168.50.16 8444 v1.31.0 crio true true} ...
	I0906 20:04:31.729698   72867 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:31.729799   72867 ssh_runner.go:195] Run: crio config
	I0906 20:04:31.777272   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:31.777299   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:31.777316   72867 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:31.777336   72867 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.16 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653828 NodeName:default-k8s-diff-port-653828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:31.777509   72867 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.16
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653828"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:31.777577   72867 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:31.788008   72867 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:31.788070   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:31.798261   72867 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0906 20:04:31.815589   72867 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:31.832546   72867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0906 20:04:31.849489   72867 ssh_runner.go:195] Run: grep 192.168.50.16	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:31.853452   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
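[Editor's note] The /etc/hosts edits above (host.minikube.internal earlier, control-plane.minikube.internal here) are idempotent rewrites: any stale line for the host is filtered out with grep -v and a fresh tab-separated entry is appended via a temp file. A minimal Go sketch of the same rewrite follows; the helper name ensureHostsEntry and the /tmp scratch path are assumptions made so the sketch runs without root.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites a hosts file so exactly one line maps host to ip,
    // mirroring the grep -v / echo / cp pipeline in the log.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale entry for this host
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // scratch copy so the sketch does not need to touch the real /etc/hosts
        path := "/tmp/hosts.example"
        _ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
        if err := ensureHostsEntry(path, "192.168.50.16", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }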
	I0906 20:04:31.866273   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:31.984175   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:32.001110   72867 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828 for IP: 192.168.50.16
	I0906 20:04:32.001139   72867 certs.go:194] generating shared ca certs ...
	I0906 20:04:32.001160   72867 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:32.001343   72867 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:32.001399   72867 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:32.001413   72867 certs.go:256] generating profile certs ...
	I0906 20:04:32.001509   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/client.key
	I0906 20:04:32.001613   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key.01951d83
	I0906 20:04:32.001665   72867 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key
	I0906 20:04:32.001815   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:32.001866   72867 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:32.001880   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:32.001913   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:32.001933   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:32.001962   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:32.002001   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:32.002812   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:32.037177   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:32.078228   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:32.117445   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:32.153039   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 20:04:32.186458   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:28.120786   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:28.120826   72441 pod_ready.go:82] duration metric: took 7.509209061s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:28.120842   72441 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:30.129518   72441 pod_ready.go:103] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:31.059799   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.060272   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.060294   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.060226   74166 retry.go:31] will retry after 841.627974ms: waiting for machine to come up
	I0906 20:04:31.903823   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.904258   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.904280   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.904238   74166 retry.go:31] will retry after 1.274018797s: waiting for machine to come up
	I0906 20:04:33.179723   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:33.180090   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:33.180133   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:33.180059   74166 retry.go:31] will retry after 1.496142841s: waiting for machine to come up
	I0906 20:04:34.678209   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:34.678697   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:34.678726   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:34.678652   74166 retry.go:31] will retry after 1.795101089s: waiting for machine to come up
	I0906 20:04:32.216815   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:32.245378   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:32.272163   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:32.297017   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:32.321514   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:32.345724   72867 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:32.362488   72867 ssh_runner.go:195] Run: openssl version
	I0906 20:04:32.368722   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:32.380099   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384777   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384834   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.392843   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:32.405716   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:32.417043   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422074   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422143   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.427946   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:32.439430   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:32.450466   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455056   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455114   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.460970   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
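[Editor's note] The openssl/ln pairs above implement OpenSSL's CA-path layout: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (51391683.0, 3ec20f2e.0, b5213941.0 in this run) so the library can find it by hash lookup. A hedged Go sketch of that step is below; it shells out to the real `openssl` binary, and the /tmp destination is a placeholder chosen so the sketch does not need root.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash creates <certsDir>/<hash>.0 pointing at pemPath, the
    // layout OpenSSL uses to locate CA certificates by subject hash.
    func linkBySubjectHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }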
	I0906 20:04:32.471978   72867 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:32.476838   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:32.483008   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:32.489685   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:32.496446   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:32.502841   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:32.509269   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
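[Editor's note] Each `openssl x509 -checkend 86400` above asks whether a control-plane certificate will still be valid 24 hours from now; a non-zero exit would mean the cert is about to expire and needs regenerating before the cluster is restarted. The pure-Go sketch below performs the equivalent NotAfter comparison; the function name expiresWithin is an assumption for illustration, and the path is one of the certs the log checks.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // before now+window (the check `openssl x509 -checkend` performs).
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(window)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }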
	I0906 20:04:32.515687   72867 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:32.515791   72867 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:32.515853   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.567687   72867 cri.go:89] found id: ""
	I0906 20:04:32.567763   72867 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:32.578534   72867 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:32.578552   72867 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:32.578598   72867 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:32.588700   72867 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:32.589697   72867 kubeconfig.go:125] found "default-k8s-diff-port-653828" server: "https://192.168.50.16:8444"
	I0906 20:04:32.591739   72867 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:32.601619   72867 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.16
	I0906 20:04:32.601649   72867 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:32.601659   72867 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:32.601724   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.640989   72867 cri.go:89] found id: ""
	I0906 20:04:32.641056   72867 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:32.659816   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:32.670238   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:32.670274   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:32.670327   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:04:32.679687   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:32.679778   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:32.689024   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:04:32.698403   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:32.698465   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:32.707806   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.717015   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:32.717105   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.726408   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:04:32.735461   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:32.735538   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
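[Editor's note] The repeated grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444, otherwise it is deleted so kubeadm can regenerate it (here all four files are simply missing). A hedged Go sketch of that filter is below; removeIfStale is a name made up for the illustration.

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // removeIfStale deletes path unless it mentions the expected apiserver endpoint.
    func removeIfStale(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if os.IsNotExist(err) {
            return nil // nothing to clean up
        }
        if err != nil {
            return err
        }
        if bytes.Contains(data, []byte(endpoint)) {
            return nil // config already targets the right endpoint; keep it
        }
        return os.Remove(path)
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            if err := removeIfStale("/etc/kubernetes/"+f, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }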
	I0906 20:04:32.744701   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:32.754202   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:32.874616   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.759668   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.984693   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.051998   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
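[Editor's note] The five commands above rebuild the control plane piecewise with kubeadm init phases: certs, kubeconfig files, kubelet start, static control-plane manifests, then local etcd. A hedged Go sketch of driving that same phase sequence is below; it simply shells out to the kubeadm binary and config path taken from the log and is not minikube's actual restart path (running it for real requires root on a node).

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // runPhases invokes the kubeadm init phases in the same order as the log:
    // certs, kubeconfig, kubelet-start, control-plane, etcd.
    func runPhases(kubeadm, config string) error {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", config)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                return fmt.Errorf("kubeadm %v: %w", p, err)
            }
        }
        return nil
    }

    func main() {
        if err := runPhases("/var/lib/minikube/binaries/v1.31.0/kubeadm", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }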
	I0906 20:04:34.155274   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:34.155384   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:34.655749   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.156069   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.656120   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.672043   72867 api_server.go:72] duration metric: took 1.516769391s to wait for apiserver process to appear ...
	I0906 20:04:35.672076   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:35.672099   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:32.628208   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.628235   72441 pod_ready.go:82] duration metric: took 4.507383414s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.628248   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633941   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.633965   72441 pod_ready.go:82] duration metric: took 5.709738ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633975   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639227   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.639249   72441 pod_ready.go:82] duration metric: took 5.26842ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639259   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644664   72441 pod_ready.go:93] pod "kube-proxy-crvq7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.644690   72441 pod_ready.go:82] duration metric: took 5.423551ms for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644701   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650000   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.650022   72441 pod_ready.go:82] duration metric: took 5.312224ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650034   72441 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:34.657709   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:37.157744   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:38.092386   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.092429   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.092448   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.129071   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.129110   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.172277   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.213527   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.213573   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:38.673103   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.677672   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.677704   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.172237   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.179638   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:39.179670   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.672801   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.678523   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:04:39.688760   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:39.688793   72867 api_server.go:131] duration metric: took 4.016709147s to wait for apiserver health ...
	I0906 20:04:39.688804   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:39.688812   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:39.690721   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
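	The repeated 500 responses above come from kube-apiserver post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that have not finished yet; minikube keeps polling /healthz until it returns 200, which here takes about 4 seconds. A minimal Go sketch of that polling pattern (illustrative only, not minikube's api_server.go; the package and function names, interval, and TLS handling are assumptions):

	// Package apiwait: sketch of polling an HTTPS /healthz endpoint until it
	// returns 200 OK or the deadline passes. TLS verification is skipped here
	// because the test VM uses a self-signed certificate (an assumption).
	package apiwait

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "returned 200: ok" case in the log
				}
				// A 500 listing "[-]poststarthook/... failed" means the apiserver is still starting.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}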
	I0906 20:04:36.474937   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:36.475399   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:36.475497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:36.475351   74166 retry.go:31] will retry after 1.918728827s: waiting for machine to come up
	I0906 20:04:38.397024   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:38.397588   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:38.397617   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:38.397534   74166 retry.go:31] will retry after 3.460427722s: waiting for machine to come up
	I0906 20:04:39.692055   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:39.707875   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
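	The 496-byte file written here is the bridge CNI configuration minikube generates for the kvm2 + crio combination; CRI-O reads it from /etc/cni/net.d when it sets up pod networking. The file's actual contents are not shown in the log.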
	I0906 20:04:39.728797   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:39.740514   72867 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:39.740553   72867 system_pods.go:61] "coredns-6f6b679f8f-mvwth" [53675f76-d849-471c-9cd1-561e2f8e6499] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:39.740562   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [f69c9488-87d4-487e-902b-588182c2e2e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:39.740567   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [d641f983-776e-4102-81a3-ba3cf49911a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:39.740579   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [1b09e88d-b038-42d3-9c36-4eee1eff1c4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:39.740585   72867 system_pods.go:61] "kube-proxy-9wlq4" [5254a977-ded3-439d-8db0-cd54ccd96940] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:39.740590   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [f8c16cf5-2c76-428f-83de-e79c49566683] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:39.740594   72867 system_pods.go:61] "metrics-server-6867b74b74-dds56" [6219eb1e-2904-487c-b4ed-d786a0627281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:39.740598   72867 system_pods.go:61] "storage-provisioner" [58dd82cd-e250-4f57-97ad-55408f001cc3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:39.740605   72867 system_pods.go:74] duration metric: took 11.784722ms to wait for pod list to return data ...
	I0906 20:04:39.740614   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:39.745883   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:39.745913   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:39.745923   72867 node_conditions.go:105] duration metric: took 5.304169ms to run NodePressure ...
	I0906 20:04:39.745945   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:40.031444   72867 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036537   72867 kubeadm.go:739] kubelet initialised
	I0906 20:04:40.036556   72867 kubeadm.go:740] duration metric: took 5.087185ms waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036563   72867 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:40.044926   72867 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:42.050947   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:39.657641   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:42.156327   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:41.860109   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:41.860612   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:41.860640   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:41.860560   74166 retry.go:31] will retry after 4.509018672s: waiting for machine to come up
	I0906 20:04:44.051148   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.554068   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:44.157427   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.656559   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:47.793833   72322 start.go:364] duration metric: took 56.674519436s to acquireMachinesLock for "no-preload-504385"
	I0906 20:04:47.793890   72322 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:47.793898   72322 fix.go:54] fixHost starting: 
	I0906 20:04:47.794329   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:47.794363   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:47.812048   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0906 20:04:47.812496   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:47.813081   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:04:47.813109   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:47.813446   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:47.813741   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:04:47.813945   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:04:47.815314   72322 fix.go:112] recreateIfNeeded on no-preload-504385: state=Stopped err=<nil>
	I0906 20:04:47.815338   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	W0906 20:04:47.815507   72322 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:47.817424   72322 out.go:177] * Restarting existing kvm2 VM for "no-preload-504385" ...
	I0906 20:04:47.818600   72322 main.go:141] libmachine: (no-preload-504385) Calling .Start
	I0906 20:04:47.818760   72322 main.go:141] libmachine: (no-preload-504385) Ensuring networks are active...
	I0906 20:04:47.819569   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network default is active
	I0906 20:04:47.819883   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network mk-no-preload-504385 is active
	I0906 20:04:47.820233   72322 main.go:141] libmachine: (no-preload-504385) Getting domain xml...
	I0906 20:04:47.821002   72322 main.go:141] libmachine: (no-preload-504385) Creating domain...
	I0906 20:04:46.374128   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374599   73230 main.go:141] libmachine: (old-k8s-version-843298) Found IP for machine: 192.168.72.30
	I0906 20:04:46.374629   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has current primary IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374642   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserving static IP address...
	I0906 20:04:46.375045   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.375071   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | skip adding static IP to network mk-old-k8s-version-843298 - found existing host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"}
	I0906 20:04:46.375081   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserved static IP address: 192.168.72.30
	I0906 20:04:46.375104   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting for SSH to be available...
	I0906 20:04:46.375119   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Getting to WaitForSSH function...
	I0906 20:04:46.377497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377836   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.377883   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377956   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH client type: external
	I0906 20:04:46.377982   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa (-rw-------)
	I0906 20:04:46.378028   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:46.378044   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | About to run SSH command:
	I0906 20:04:46.378054   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | exit 0
	I0906 20:04:46.505025   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:46.505386   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetConfigRaw
	I0906 20:04:46.506031   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.508401   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.508787   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.508827   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.509092   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:04:46.509321   73230 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:46.509339   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:46.509549   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.511816   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512230   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.512265   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512436   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.512618   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512794   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512932   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.513123   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.513364   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.513378   73230 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:46.629437   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:46.629469   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629712   73230 buildroot.go:166] provisioning hostname "old-k8s-version-843298"
	I0906 20:04:46.629731   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629910   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.632226   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632620   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.632653   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632817   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.633009   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633204   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633364   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.633544   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.633758   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.633779   73230 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-843298 && echo "old-k8s-version-843298" | sudo tee /etc/hostname
	I0906 20:04:46.764241   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-843298
	
	I0906 20:04:46.764271   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.766678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767063   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.767092   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767236   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.767414   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767591   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767740   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.767874   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.768069   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.768088   73230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-843298' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-843298/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-843298' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:46.890399   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:46.890424   73230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:46.890461   73230 buildroot.go:174] setting up certificates
	I0906 20:04:46.890471   73230 provision.go:84] configureAuth start
	I0906 20:04:46.890479   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.890714   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.893391   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893765   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.893802   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893942   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.896173   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896505   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.896524   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896688   73230 provision.go:143] copyHostCerts
	I0906 20:04:46.896741   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:46.896756   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:46.896814   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:46.896967   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:46.896977   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:46.897008   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:46.897096   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:46.897104   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:46.897133   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:46.897193   73230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-843298 san=[127.0.0.1 192.168.72.30 localhost minikube old-k8s-version-843298]
	I0906 20:04:47.128570   73230 provision.go:177] copyRemoteCerts
	I0906 20:04:47.128627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:47.128653   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.131548   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.131952   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.131981   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.132164   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.132396   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.132571   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.132705   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.223745   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:47.249671   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0906 20:04:47.274918   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:47.300351   73230 provision.go:87] duration metric: took 409.869395ms to configureAuth
	I0906 20:04:47.300376   73230 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:47.300584   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:04:47.300673   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.303255   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303559   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.303581   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303739   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.303943   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304098   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304266   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.304407   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.304623   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.304644   73230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:47.539793   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:47.539824   73230 machine.go:96] duration metric: took 1.030489839s to provisionDockerMachine
	I0906 20:04:47.539836   73230 start.go:293] postStartSetup for "old-k8s-version-843298" (driver="kvm2")
	I0906 20:04:47.539849   73230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:47.539884   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.540193   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:47.540220   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.543190   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543482   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.543506   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543707   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.543938   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.544097   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.544243   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.633100   73230 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:47.637336   73230 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:47.637368   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:47.637459   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:47.637541   73230 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:47.637627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:47.648442   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:47.672907   73230 start.go:296] duration metric: took 133.055727ms for postStartSetup
	I0906 20:04:47.672951   73230 fix.go:56] duration metric: took 21.114855209s for fixHost
	I0906 20:04:47.672978   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.675459   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.675833   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.675863   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.676005   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.676303   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676471   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676661   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.676846   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.677056   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.677070   73230 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:47.793647   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653087.750926682
	
	I0906 20:04:47.793671   73230 fix.go:216] guest clock: 1725653087.750926682
	I0906 20:04:47.793681   73230 fix.go:229] Guest: 2024-09-06 20:04:47.750926682 +0000 UTC Remote: 2024-09-06 20:04:47.67295613 +0000 UTC m=+232.250384025 (delta=77.970552ms)
	I0906 20:04:47.793735   73230 fix.go:200] guest clock delta is within tolerance: 77.970552ms
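	The delta is simply the guest clock minus the host-observed remote time: 1725653087.750926682 - 1725653087.67295613 ≈ 0.078 s, i.e. the 77.970552ms reported above, which is within minikube's clock-skew tolerance, so no clock adjustment is needed.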
	I0906 20:04:47.793746   73230 start.go:83] releasing machines lock for "old-k8s-version-843298", held for 21.235682628s
	I0906 20:04:47.793778   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.794059   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:47.796792   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797195   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.797229   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797425   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798019   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798230   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798314   73230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:47.798360   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.798488   73230 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:47.798509   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.801253   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801632   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.801658   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801867   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802060   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802122   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.802152   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.802210   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802318   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802460   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802504   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.802580   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802722   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.886458   73230 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:47.910204   73230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:48.055661   73230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:48.063024   73230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:48.063090   73230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:48.084749   73230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:48.084771   73230 start.go:495] detecting cgroup driver to use...
	I0906 20:04:48.084892   73230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:48.105494   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:48.123487   73230 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:48.123564   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:48.145077   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:48.161336   73230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:48.283568   73230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:48.445075   73230 docker.go:233] disabling docker service ...
	I0906 20:04:48.445146   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:48.461122   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:48.475713   73230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:48.632804   73230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:48.762550   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:48.778737   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:48.798465   73230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:04:48.798549   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.811449   73230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:48.811523   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.824192   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.835598   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.847396   73230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:48.860005   73230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:48.871802   73230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:48.871864   73230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:48.887596   73230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:48.899508   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:49.041924   73230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:49.144785   73230 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:49.144885   73230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:49.150404   73230 start.go:563] Will wait 60s for crictl version
	I0906 20:04:49.150461   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:49.154726   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:49.202450   73230 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:49.202557   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.235790   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.270094   73230 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0906 20:04:49.271457   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:49.274710   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275114   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:49.275139   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275475   73230 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:49.280437   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:49.293664   73230 kubeadm.go:883] updating cluster {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0906 20:04:49.293793   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:04:49.293842   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:49.348172   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:49.348251   73230 ssh_runner.go:195] Run: which lz4
	I0906 20:04:49.352703   73230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:49.357463   73230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:49.357501   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
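	Since /preloaded.tar.lz4 does not exist on the VM, the ~473 MB preload tarball for Kubernetes v1.20.0 with cri-o is copied over from the host cache and, a few lines further down, extracted into /var so that CRI-O's image store can be populated before the cluster is started.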
	I0906 20:04:49.056116   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:51.553185   72867 pod_ready.go:93] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.553217   72867 pod_ready.go:82] duration metric: took 11.508264695s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.553231   72867 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563758   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.563788   72867 pod_ready.go:82] duration metric: took 10.547437ms for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563802   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570906   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.570940   72867 pod_ready.go:82] duration metric: took 7.128595ms for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570957   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
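	The pod_ready waits above poll each system-critical pod until its Ready condition is True, with a 4m0s budget per pod. A rough client-go sketch of the same idea (illustrative only; the package and function names, poll interval, and error handling are assumptions, not minikube's pod_ready.go):

	// Package podwait: sketch of waiting for a pod's Ready condition with client-go.
	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls a pod until its PodReady condition is True or ctx expires.
	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // the `"Ready":"True"` case in the log
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
			case <-time.After(2 * time.Second):
			}
		}
	}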
	I0906 20:04:48.657527   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:50.662561   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:49.146755   72322 main.go:141] libmachine: (no-preload-504385) Waiting to get IP...
	I0906 20:04:49.147780   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.148331   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.148406   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.148309   74322 retry.go:31] will retry after 250.314453ms: waiting for machine to come up
	I0906 20:04:49.399920   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.400386   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.400468   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.400345   74322 retry.go:31] will retry after 247.263156ms: waiting for machine to come up
	I0906 20:04:49.648894   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.649420   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.649445   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.649376   74322 retry.go:31] will retry after 391.564663ms: waiting for machine to come up
	I0906 20:04:50.043107   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.043594   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.043617   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.043548   74322 retry.go:31] will retry after 513.924674ms: waiting for machine to come up
	I0906 20:04:50.559145   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.559637   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.559675   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.559543   74322 retry.go:31] will retry after 551.166456ms: waiting for machine to come up
	I0906 20:04:51.111906   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.112967   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.112999   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.112921   74322 retry.go:31] will retry after 653.982425ms: waiting for machine to come up
	I0906 20:04:51.768950   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.769466   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.769496   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.769419   74322 retry.go:31] will retry after 935.670438ms: waiting for machine to come up
	I0906 20:04:52.706493   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:52.707121   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:52.707152   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:52.707062   74322 retry.go:31] will retry after 1.141487289s: waiting for machine to come up
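Note: the repeated "retry.go:31 ... will retry after ..." lines above show the pattern used while waiting for the libvirt domain to obtain a DHCP lease: probe for the IP, and on failure sleep a growing, slightly jittered delay before probing again. Below is a minimal Go sketch of that retry-with-backoff idea; waitForIP and lookupIP are hypothetical names, not minikube's actual libmachine code.

// Illustrative only: lookupIP stands in for asking the hypervisor for the
// domain's DHCP lease; the backoff mirrors the "will retry after ..." log lines.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP, sleeping a growing, jittered delay between
// attempts until the machine comes up or maxWait is exceeded.
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return "", fmt.Errorf("machine did not come up within %s", maxWait)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}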
	I0906 20:04:51.190323   73230 crio.go:462] duration metric: took 1.837657617s to copy over tarball
	I0906 20:04:51.190410   73230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:54.320754   73230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.130319477s)
	I0906 20:04:54.320778   73230 crio.go:469] duration metric: took 3.130424981s to extract the tarball
	I0906 20:04:54.320785   73230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:54.388660   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:54.427475   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:54.427505   73230 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:04:54.427580   73230 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.427594   73230 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.427611   73230 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.427662   73230 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.427691   73230 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.427696   73230 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.427813   73230 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.427672   73230 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0906 20:04:54.429432   73230 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.429443   73230 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.429447   73230 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.429448   73230 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.429475   73230 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.429449   73230 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.429496   73230 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.429589   73230 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 20:04:54.603502   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.607745   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.610516   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.613580   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.616591   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.622381   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 20:04:54.636746   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.690207   73230 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0906 20:04:54.690254   73230 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.690306   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.788758   73230 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0906 20:04:54.788804   73230 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.788876   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.804173   73230 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0906 20:04:54.804228   73230 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.804273   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817005   73230 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0906 20:04:54.817056   73230 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.817074   73230 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0906 20:04:54.817101   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817122   73230 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.817138   73230 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0906 20:04:54.817167   73230 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 20:04:54.817202   73230 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0906 20:04:54.817213   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817220   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.817227   73230 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.817168   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817253   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817301   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.817333   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902264   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.902422   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902522   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.902569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.902602   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.902654   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:54.902708   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.061686   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.073933   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.085364   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:55.085463   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.085399   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.085610   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:55.085725   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.192872   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:55.196085   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.255204   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.288569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.291461   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0906 20:04:55.291541   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.291559   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0906 20:04:55.291726   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0906 20:04:53.578469   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.578504   72867 pod_ready.go:82] duration metric: took 2.007539423s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.578534   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583560   72867 pod_ready.go:93] pod "kube-proxy-9wlq4" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.583583   72867 pod_ready.go:82] duration metric: took 5.037068ms for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583594   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832422   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:54.832453   72867 pod_ready.go:82] duration metric: took 1.248849975s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832480   72867 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:56.840031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
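Note: the pod_ready.go lines above are the output of a readiness poll: the pod object is fetched repeatedly until its Ready condition reports True, or the 4m0s timeout expires (metrics-server never reaches Ready here). A rough client-go sketch of that kind of wait follows; it assumes a pre-built clientset and is illustrative, not minikube's actual pod_ready.go helper.

// Sketch only: same idea as the logged waits, expressed with client-go.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodReady polls the pod until its Ready condition is True or the
// timeout expires.
func WaitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // not found yet or transient error: keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}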
	I0906 20:04:53.156842   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:55.236051   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:53.849822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:53.850213   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:53.850235   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:53.850178   74322 retry.go:31] will retry after 1.858736556s: waiting for machine to come up
	I0906 20:04:55.710052   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:55.710550   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:55.710598   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:55.710496   74322 retry.go:31] will retry after 2.033556628s: waiting for machine to come up
	I0906 20:04:57.745989   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:57.746433   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:57.746459   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:57.746388   74322 retry.go:31] will retry after 1.985648261s: waiting for machine to come up
	I0906 20:04:55.500590   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0906 20:04:55.500702   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0906 20:04:55.500740   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0906 20:04:55.500824   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0906 20:04:55.500885   73230 cache_images.go:92] duration metric: took 1.07336017s to LoadCachedImages
	W0906 20:04:55.500953   73230 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
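Note: the sequence above inspects each required image in the runtime (podman image inspect), marks missing ones as "needs transfer", removes any stale copy with crictl rmi, and then loads the image from the local cache directory; here the kube-proxy cache file is absent, so the load ends with the warning just logged. A hedged sketch of that presence check and cache fallback is below; imageInRuntime and loadFromCache are invented stand-ins for the real podman/crictl plumbing.

// Hedged sketch: only the control flow is the point.
package imagecache

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// imageInRuntime would ask the container runtime whether the image exists. Stubbed here.
func imageInRuntime(img string) bool { return false }

// loadFromCache would stream the cached image file into the runtime. Stubbed here.
func loadFromCache(img, path string) error { return nil }

// EnsureImages loads every required image missing from the runtime out of a
// local cache directory, failing when a cache file does not exist.
func EnsureImages(images []string, cacheDir string) error {
	for _, img := range images {
		if imageInRuntime(img) {
			continue
		}
		path := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
		if _, err := os.Stat(path); err != nil {
			return fmt.Errorf("stat %s: %w", path, err)
		}
		if err := loadFromCache(img, path); err != nil {
			return fmt.Errorf("load %s: %w", img, err)
		}
	}
	return nil
}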
	I0906 20:04:55.500969   73230 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.20.0 crio true true} ...
	I0906 20:04:55.501112   73230 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-843298 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:55.501192   73230 ssh_runner.go:195] Run: crio config
	I0906 20:04:55.554097   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:04:55.554119   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:55.554135   73230 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:55.554154   73230 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-843298 NodeName:old-k8s-version-843298 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 20:04:55.554359   73230 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-843298"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:55.554441   73230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0906 20:04:55.565923   73230 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:55.566004   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:55.577366   73230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0906 20:04:55.595470   73230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:55.614641   73230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0906 20:04:55.637739   73230 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:55.642233   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:55.658409   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:55.804327   73230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:55.824288   73230 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298 for IP: 192.168.72.30
	I0906 20:04:55.824308   73230 certs.go:194] generating shared ca certs ...
	I0906 20:04:55.824323   73230 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:55.824479   73230 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:55.824541   73230 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:55.824560   73230 certs.go:256] generating profile certs ...
	I0906 20:04:55.824680   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.key
	I0906 20:04:55.824755   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3
	I0906 20:04:55.824799   73230 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key
	I0906 20:04:55.824952   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:55.824995   73230 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:55.825008   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:55.825041   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:55.825072   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:55.825102   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:55.825158   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:55.825878   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:55.868796   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:55.905185   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:55.935398   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:55.973373   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 20:04:56.008496   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:04:56.046017   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:56.080049   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:56.122717   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:56.151287   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:56.184273   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:56.216780   73230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:56.239708   73230 ssh_runner.go:195] Run: openssl version
	I0906 20:04:56.246127   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:56.257597   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262515   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262594   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.269207   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:56.281646   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:56.293773   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299185   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299255   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.305740   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:56.319060   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:56.330840   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336013   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336082   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.342576   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:56.354648   73230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:56.359686   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:56.366321   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:56.372646   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:56.379199   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:56.386208   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:56.392519   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
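Note: each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate remains valid for at least the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. The same check expressed in Go, as a small illustrative sketch (the chosen cert path is just an example from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least duration d, mirroring `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}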
	I0906 20:04:56.399335   73230 kubeadm.go:392] StartCluster: {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:56.399442   73230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:56.399495   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.441986   73230 cri.go:89] found id: ""
	I0906 20:04:56.442069   73230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:56.454884   73230 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:56.454907   73230 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:56.454977   73230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:56.465647   73230 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:56.466650   73230 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:04:56.467285   73230 kubeconfig.go:62] /home/jenkins/minikube-integration/19576-6021/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-843298" cluster setting kubeconfig missing "old-k8s-version-843298" context setting]
	I0906 20:04:56.468248   73230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:56.565587   73230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:56.576221   73230 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.30
	I0906 20:04:56.576261   73230 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:56.576277   73230 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:56.576342   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.621597   73230 cri.go:89] found id: ""
	I0906 20:04:56.621663   73230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:56.639924   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:56.649964   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:56.649989   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:56.650042   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:56.661290   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:56.661343   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:56.671361   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:56.680865   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:56.680939   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:56.696230   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.706613   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:56.706692   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.719635   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:56.729992   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:56.730045   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:56.740040   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:56.750666   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:56.891897   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.681824   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.972206   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.091751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.206345   73230 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:58.206443   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:58.707412   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.206780   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.707273   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:00.207218   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.340092   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:01.838387   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:57.658033   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:00.157741   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:59.734045   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:59.734565   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:59.734592   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:59.734506   74322 retry.go:31] will retry after 2.767491398s: waiting for machine to come up
	I0906 20:05:02.505314   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:02.505749   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:05:02.505780   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:05:02.505697   74322 retry.go:31] will retry after 3.51382931s: waiting for machine to come up
	I0906 20:05:00.707010   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.206708   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.707125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.207349   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.706670   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.207287   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.706650   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.207125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.707193   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:05.207119   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
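Note: the half-second cadence of the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above is a simple process poll: keep checking for the apiserver process until pgrep succeeds or an overall timeout is reached. A minimal Go sketch of that loop follows; runPgrep is a hypothetical callback standing in for running pgrep on the node over SSH.

// Minimal sketch of the poll behind the repeated pgrep runs above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration, runPgrep func() error) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runPgrep(); err == nil {
			return nil // pgrep exited 0: the process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	err := waitForAPIServerProcess(30*time.Second, func() error {
		// In minikube this command would run on the node over SSH.
		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	})
	fmt.Println(err)
}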
	I0906 20:05:03.838639   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:05.839195   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:02.655906   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:04.656677   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:07.157732   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:06.023595   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024063   72322 main.go:141] libmachine: (no-preload-504385) Found IP for machine: 192.168.61.184
	I0906 20:05:06.024095   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has current primary IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024105   72322 main.go:141] libmachine: (no-preload-504385) Reserving static IP address...
	I0906 20:05:06.024576   72322 main.go:141] libmachine: (no-preload-504385) Reserved static IP address: 192.168.61.184
	I0906 20:05:06.024598   72322 main.go:141] libmachine: (no-preload-504385) Waiting for SSH to be available...
	I0906 20:05:06.024621   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.024643   72322 main.go:141] libmachine: (no-preload-504385) DBG | skip adding static IP to network mk-no-preload-504385 - found existing host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"}
	I0906 20:05:06.024666   72322 main.go:141] libmachine: (no-preload-504385) DBG | Getting to WaitForSSH function...
	I0906 20:05:06.026845   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027166   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.027219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027296   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH client type: external
	I0906 20:05:06.027321   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa (-rw-------)
	I0906 20:05:06.027355   72322 main.go:141] libmachine: (no-preload-504385) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:05:06.027376   72322 main.go:141] libmachine: (no-preload-504385) DBG | About to run SSH command:
	I0906 20:05:06.027403   72322 main.go:141] libmachine: (no-preload-504385) DBG | exit 0
	I0906 20:05:06.148816   72322 main.go:141] libmachine: (no-preload-504385) DBG | SSH cmd err, output: <nil>: 
	I0906 20:05:06.149196   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetConfigRaw
	I0906 20:05:06.149951   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.152588   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.152970   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.153003   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.153238   72322 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/config.json ...
	I0906 20:05:06.153485   72322 machine.go:93] provisionDockerMachine start ...
	I0906 20:05:06.153508   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:06.153714   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.156031   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156394   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.156425   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156562   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.156732   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.156901   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.157051   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.157205   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.157411   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.157425   72322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:05:06.261544   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:05:06.261586   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.261861   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:05:06.261895   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.262063   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.264812   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265192   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.265219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265400   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.265570   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265705   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265856   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.265990   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.266145   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.266157   72322 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-504385 && echo "no-preload-504385" | sudo tee /etc/hostname
	I0906 20:05:06.383428   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-504385
	
	I0906 20:05:06.383456   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.386368   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386722   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.386755   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386968   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.387152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387322   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387439   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.387617   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.387817   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.387840   72322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-504385' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-504385/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-504385' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:05:06.501805   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:05:06.501836   72322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:05:06.501854   72322 buildroot.go:174] setting up certificates
	I0906 20:05:06.501866   72322 provision.go:84] configureAuth start
	I0906 20:05:06.501873   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.502152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.504721   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505086   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.505115   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505250   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.507420   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507765   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.507795   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507940   72322 provision.go:143] copyHostCerts
	I0906 20:05:06.508008   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:05:06.508031   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:05:06.508087   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:05:06.508175   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:05:06.508183   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:05:06.508208   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:05:06.508297   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:05:06.508307   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:05:06.508338   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:05:06.508406   72322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.no-preload-504385 san=[127.0.0.1 192.168.61.184 localhost minikube no-preload-504385]
	I0906 20:05:06.681719   72322 provision.go:177] copyRemoteCerts
	I0906 20:05:06.681786   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:05:06.681810   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.684460   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684779   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.684822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684962   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.685125   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.685258   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.685368   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:06.767422   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:05:06.794881   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0906 20:05:06.821701   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:05:06.848044   72322 provision.go:87] duration metric: took 346.1664ms to configureAuth
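The configureAuth step that just finished regenerates the machine's server certificate so its subject alternative names cover everything listed at provision.go:117 above (127.0.0.1, 192.168.61.184, localhost, minikube, no-preload-504385), then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal, self-contained Go sketch of issuing such a SAN-bearing certificate from an existing CA looks roughly like the following; this is an illustration only, not minikube's provisioning code, and the local file names are placeholders (it also assumes the CA key is a PKCS#1 RSA key).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the signing CA (placeholder paths; minikube keeps ca.pem / ca-key.pem
	// under the .minikube/certs directory shown in the log).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM input")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// SANs mirror the san=[...] list logged by provision.go:117.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-504385"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-504385"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.184")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	// server.pem / server-key.pem are the files later scp'd to /etc/docker.
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey),
	}), 0600)
}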
	I0906 20:05:06.848075   72322 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:05:06.848271   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:05:06.848348   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.850743   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851037   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.851064   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851226   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.851395   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851549   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851674   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.851791   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.851993   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.852020   72322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:05:07.074619   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:05:07.074643   72322 machine.go:96] duration metric: took 921.143238ms to provisionDockerMachine
	I0906 20:05:07.074654   72322 start.go:293] postStartSetup for "no-preload-504385" (driver="kvm2")
	I0906 20:05:07.074664   72322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:05:07.074678   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.075017   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:05:07.075042   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.077988   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078268   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.078287   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078449   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.078634   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.078791   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.078946   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.165046   72322 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:05:07.169539   72322 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:05:07.169565   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:05:07.169631   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:05:07.169700   72322 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:05:07.169783   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:05:07.179344   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:07.204213   72322 start.go:296] duration metric: took 129.545341ms for postStartSetup
	I0906 20:05:07.204265   72322 fix.go:56] duration metric: took 19.41036755s for fixHost
	I0906 20:05:07.204287   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.207087   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207473   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.207513   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207695   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.207905   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208090   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208267   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.208436   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:07.208640   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:07.208655   72322 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:05:07.314172   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653107.281354639
	
	I0906 20:05:07.314195   72322 fix.go:216] guest clock: 1725653107.281354639
	I0906 20:05:07.314205   72322 fix.go:229] Guest: 2024-09-06 20:05:07.281354639 +0000 UTC Remote: 2024-09-06 20:05:07.204269406 +0000 UTC m=+358.676673749 (delta=77.085233ms)
	I0906 20:05:07.314228   72322 fix.go:200] guest clock delta is within tolerance: 77.085233ms
	I0906 20:05:07.314237   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 19.52037381s
	I0906 20:05:07.314266   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.314552   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:07.317476   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.317839   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.317873   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.318003   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318542   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318716   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318821   72322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:05:07.318876   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.318991   72322 ssh_runner.go:195] Run: cat /version.json
	I0906 20:05:07.319018   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.321880   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322102   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322308   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322340   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322472   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322508   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322550   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322685   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.322713   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322868   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.322875   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.323062   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.323066   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.323221   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.424438   72322 ssh_runner.go:195] Run: systemctl --version
	I0906 20:05:07.430755   72322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:05:07.579436   72322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:05:07.585425   72322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:05:07.585493   72322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:05:07.601437   72322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:05:07.601462   72322 start.go:495] detecting cgroup driver to use...
	I0906 20:05:07.601529   72322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:05:07.620368   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:05:07.634848   72322 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:05:07.634912   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:05:07.648810   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:05:07.664084   72322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:05:07.796601   72322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:05:07.974836   72322 docker.go:233] disabling docker service ...
	I0906 20:05:07.974911   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:05:07.989013   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:05:08.002272   72322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:05:08.121115   72322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:05:08.247908   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:05:08.262855   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:05:08.281662   72322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:05:08.281730   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.292088   72322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:05:08.292165   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.302601   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.313143   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.323852   72322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:05:08.335791   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.347619   72322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.365940   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
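The crictl.yaml write plus the sed edits above amount to pointing crictl at the CRI-O socket and pinning CRI-O to the cgroupfs driver, the 3.10 pause image, a "pod" conmon cgroup and an unprivileged-port sysctl. Reconstructed from those commands (section placement follows CRI-O's stock config layout; this is not a capture of the actual files), the result looks approximately like:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant lines)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

CRI-O only rereads this drop-in on the "systemctl restart crio" that follows a few lines below.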
	I0906 20:05:08.376124   72322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:05:08.385677   72322 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:05:08.385743   72322 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:05:08.398445   72322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:05:08.408477   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:08.518447   72322 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:05:08.613636   72322 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:05:08.613707   72322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:05:08.619050   72322 start.go:563] Will wait 60s for crictl version
	I0906 20:05:08.619134   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:08.622959   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:05:08.668229   72322 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:05:08.668297   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.702416   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.733283   72322 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:05:05.707351   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.206573   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.707452   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.206554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.706854   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.206925   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.707456   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.207200   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.706741   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:10.206605   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
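The 73230 process, meanwhile, is waiting for the API server to come back by re-running the same pgrep probe roughly every 500ms. A standalone Go loop doing an equivalent check could look like the sketch below; the 4-minute deadline is an arbitrary choice for the example, not minikube's timeout.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Same probe the log repeats: pgrep -xnf returns the newest process whose
	// full command line matches kube-apiserver.*minikube.*.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}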
	I0906 20:05:07.839381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.839918   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.157889   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:11.158761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
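At the same time, pod_ready.go keeps polling the metrics-server pods and finding their Ready condition "False" (the log lines repeat about every two seconds). The same condition can be read directly with kubectl's JSONPath support; the sketch below shells out to kubectl against whatever kubeconfig context is current, with the pod name taken from the log line above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Pod name from the log; add "--context <profile>" to the kubectl arguments
	// to target a specific minikube profile instead of the current context.
	const pod = "metrics-server-6867b74b74-gtg94"
	const jsonpath = `jsonpath={.status.conditions[?(@.type=="Ready")].status}`

	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "-n", "kube-system",
			"get", "pod", pod, "-o", jsonpath).Output()
		status := strings.TrimSpace(string(out))
		if err == nil && status == "True" {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("Ready=%q err=%v\n", status, err)
		time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
	}
}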
	I0906 20:05:08.734700   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:08.737126   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737477   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:08.737504   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737692   72322 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0906 20:05:08.741940   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:08.756235   72322 kubeadm.go:883] updating cluster {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:05:08.756380   72322 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:05:08.756426   72322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:05:08.798359   72322 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:05:08.798388   72322 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:05:08.798484   72322 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.798507   72322 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.798520   72322 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0906 20:05:08.798559   72322 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.798512   72322 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.798571   72322 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.798494   72322 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.798489   72322 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800044   72322 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.800055   72322 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800048   72322 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0906 20:05:08.800067   72322 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.800070   72322 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.800043   72322 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.800046   72322 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.800050   72322 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.960723   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.967887   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.980496   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.988288   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.990844   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.000220   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.031002   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0906 20:05:09.046388   72322 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0906 20:05:09.046430   72322 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.046471   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.079069   72322 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0906 20:05:09.079112   72322 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.079161   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147423   72322 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0906 20:05:09.147470   72322 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.147521   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147529   72322 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0906 20:05:09.147549   72322 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.147584   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153575   72322 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0906 20:05:09.153612   72322 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.153659   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153662   72322 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0906 20:05:09.153697   72322 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.153736   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.272296   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.272317   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.272325   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.272368   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.272398   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.272474   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.397590   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.398793   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.398807   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.398899   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.398912   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.398969   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.515664   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.529550   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.529604   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.529762   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.532314   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.532385   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.603138   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.654698   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0906 20:05:09.654823   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:09.671020   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0906 20:05:09.671069   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0906 20:05:09.671123   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0906 20:05:09.671156   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:09.671128   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.671208   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:09.686883   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0906 20:05:09.687013   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:09.709594   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0906 20:05:09.709706   72322 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0906 20:05:09.709758   72322 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.709858   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0906 20:05:09.709877   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709868   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.709940   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0906 20:05:09.709906   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709994   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0906 20:05:09.709771   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0906 20:05:09.709973   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0906 20:05:09.709721   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:09.714755   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0906 20:05:12.389459   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.679458658s)
	I0906 20:05:12.389498   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0906 20:05:12.389522   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389524   72322 ssh_runner.go:235] Completed: which crictl: (2.679596804s)
	I0906 20:05:12.389573   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389582   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:10.706506   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.207411   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.707316   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.207239   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.706502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.206560   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.706593   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.207192   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.706940   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:15.207250   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.338753   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.339694   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.839193   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:13.656815   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.156988   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.349906   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.960304583s)
	I0906 20:05:14.349962   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960364149s)
	I0906 20:05:14.349988   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:14.350001   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0906 20:05:14.350032   72322 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.350085   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.397740   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:16.430883   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.03310928s)
	I0906 20:05:16.430943   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 20:05:16.430977   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.080869318s)
	I0906 20:05:16.431004   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0906 20:05:16.431042   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:16.431042   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:16.431103   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:18.293255   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.862123731s)
	I0906 20:05:18.293274   72322 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.862211647s)
	I0906 20:05:18.293294   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0906 20:05:18.293315   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0906 20:05:18.293324   72322 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:18.293372   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:15.706728   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.207477   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.707337   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.206710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.707209   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.206544   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.707104   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.206752   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.706561   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:20.206507   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.840176   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.339033   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:18.657074   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.157488   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:19.142756   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0906 20:05:19.142784   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:19.142824   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:20.494611   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351756729s)
	I0906 20:05:20.494642   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0906 20:05:20.494656   72322 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.494706   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.706855   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.206585   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.706948   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.207150   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.706508   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.207459   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.706894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.206643   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.707208   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:25.206797   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.838561   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:25.838697   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:23.656303   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:26.156813   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:24.186953   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.692203906s)
	I0906 20:05:24.186987   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0906 20:05:24.187019   72322 cache_images.go:123] Successfully loaded all cached images
	I0906 20:05:24.187026   72322 cache_images.go:92] duration metric: took 15.388623154s to LoadCachedImages
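Because this is the no-preload profile, none of the Kubernetes images are preloaded into the VM, so cache_images.go pushed each cached tarball from .minikube/cache/images over SSH and loaded it with podman, removing any stale tag first with crictl; that is what the 15.4s above was spent on. A rough local equivalent of the per-image step, simply wrapping the same podman/crictl invocations that appear in the log (this is not minikube's cache_images implementation), is:

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the per-image sequence visible in the log:
// check whether the image is already present, drop any stale tag if not,
// then load the cached tarball so CRI-O can use it.
func loadCachedImage(image, tarball string) error {
	// "sudo podman image inspect --format {{.Id}}" succeeds only if the image
	// already exists in the container store.
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present, nothing to do
	}
	// Remove whatever tag is there (errors ignored: it may simply be absent).
	exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()

	// Load the image from the cached tarball under /var/lib/minikube/images.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	// Image/tarball pair taken from the log; the path layout matches the guest's
	// /var/lib/minikube/images directory.
	if err := loadCachedImage("registry.k8s.io/kube-apiserver:v1.31.0",
		"/var/lib/minikube/images/kube-apiserver_v1.31.0"); err != nil {
		fmt.Println(err)
	}
}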
	I0906 20:05:24.187040   72322 kubeadm.go:934] updating node { 192.168.61.184 8443 v1.31.0 crio true true} ...
	I0906 20:05:24.187169   72322 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-504385 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:05:24.187251   72322 ssh_runner.go:195] Run: crio config
	I0906 20:05:24.236699   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:24.236722   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:24.236746   72322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:05:24.236770   72322 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.184 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-504385 NodeName:no-preload-504385 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:05:24.236943   72322 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-504385"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:05:24.237005   72322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:05:24.247480   72322 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:05:24.247554   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:05:24.257088   72322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0906 20:05:24.274447   72322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:05:24.292414   72322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0906 20:05:24.310990   72322 ssh_runner.go:195] Run: grep 192.168.61.184	control-plane.minikube.internal$ /etc/hosts
	I0906 20:05:24.315481   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:24.327268   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:24.465318   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:05:24.482195   72322 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385 for IP: 192.168.61.184
	I0906 20:05:24.482216   72322 certs.go:194] generating shared ca certs ...
	I0906 20:05:24.482230   72322 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:05:24.482364   72322 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:05:24.482407   72322 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:05:24.482420   72322 certs.go:256] generating profile certs ...
	I0906 20:05:24.482522   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/client.key
	I0906 20:05:24.482603   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key.9c78613e
	I0906 20:05:24.482664   72322 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key
	I0906 20:05:24.482828   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:05:24.482878   72322 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:05:24.482894   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:05:24.482927   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:05:24.482956   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:05:24.482992   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:05:24.483043   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:24.483686   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:05:24.528742   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:05:24.561921   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:05:24.596162   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:05:24.636490   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 20:05:24.664450   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:05:24.690551   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:05:24.717308   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:05:24.741498   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:05:24.764388   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:05:24.789473   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:05:24.814772   72322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:05:24.833405   72322 ssh_runner.go:195] Run: openssl version
	I0906 20:05:24.841007   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:05:24.852635   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857351   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857404   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.863435   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:05:24.874059   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:05:24.884939   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889474   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889567   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.895161   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:05:24.905629   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:05:24.916101   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920494   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920550   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.925973   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
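Note: each CA bundle copied to /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (the 51391683.0, 3ec20f2e.0 and b5213941.0 names above); that hash-named symlink is how OpenSSL locates trusted certificates at verification time. A rough Go sketch of those two steps, shelling out to openssl (hypothetical helper, illustrating the technique rather than minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash of certPath and links it into
// /etc/ssl/certs/<hash>.0, the lookup name OpenSSL uses for CA certificates.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace a stale link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}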
	I0906 20:05:24.937017   72322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:05:24.941834   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:05:24.947779   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:05:24.954042   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:05:24.959977   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:05:24.965500   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:05:24.970996   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
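Note: each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is what would trigger regeneration before the cluster restart. The equivalent check written directly against crypto/x509 (an illustrative sketch; the path is one of the certs probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}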
	I0906 20:05:24.976532   72322 kubeadm.go:392] StartCluster: {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:05:24.976606   72322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:05:24.976667   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.015556   72322 cri.go:89] found id: ""
	I0906 20:05:25.015653   72322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:05:25.032921   72322 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:05:25.032954   72322 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:05:25.033009   72322 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:05:25.044039   72322 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:05:25.045560   72322 kubeconfig.go:125] found "no-preload-504385" server: "https://192.168.61.184:8443"
	I0906 20:05:25.049085   72322 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:05:25.059027   72322 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.184
	I0906 20:05:25.059060   72322 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:05:25.059073   72322 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:05:25.059128   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.096382   72322 cri.go:89] found id: ""
	I0906 20:05:25.096446   72322 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:05:25.114296   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:05:25.126150   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:05:25.126168   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:05:25.126207   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:05:25.136896   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:05:25.136964   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:05:25.148074   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:05:25.158968   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:05:25.159027   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:05:25.169642   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.179183   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:05:25.179258   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.189449   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:05:25.199237   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:05:25.199286   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:05:25.209663   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:05:25.220511   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:25.336312   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.475543   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.139195419s)
	I0906 20:05:26.475586   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.700018   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.768678   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
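Note: because existing configuration was found, the restart path does not run a full `kubeadm init`; it replays the individual init phases in order — certs, kubeconfig, kubelet-start, control-plane, etcd — against the regenerated kubeadm.yaml. A condensed sketch of that loop (hypothetical helper; the real commands above also prepend the minikube binaries directory to PATH and run under sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases re-runs the kubeadm init phases used for a control-plane restart.
func runInitPhases(config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", args, err)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}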
	I0906 20:05:26.901831   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:05:26.901928   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.401987   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.903023   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.957637   72322 api_server.go:72] duration metric: took 1.055807s to wait for apiserver process to appear ...
	I0906 20:05:27.957664   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:05:27.957684   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:27.958196   72322 api_server.go:269] stopped: https://192.168.61.184:8443/healthz: Get "https://192.168.61.184:8443/healthz": dial tcp 192.168.61.184:8443: connect: connection refused
	I0906 20:05:28.458421   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:25.706669   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.206691   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.707336   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.206666   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.706715   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.206488   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.706489   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.207461   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.707293   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:30.206591   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.840001   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:29.840101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.768451   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:05:30.768482   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:05:30.768505   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.868390   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.868430   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:30.958611   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.964946   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.964977   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.458125   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.462130   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.462155   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.958761   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.963320   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.963347   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:32.458596   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:32.464885   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:05:32.474582   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:05:32.474616   72322 api_server.go:131] duration metric: took 4.51694462s to wait for apiserver health ...
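Note: the /healthz probe above is retried roughly every 500ms. A connection-refused error means the apiserver process is not listening yet, a 403 means the endpoint is up but anonymous access is still forbidden while RBAC bootstraps, and the 500 responses enumerate post-start hooks (bootstrap-roles, bootstrap-controller, apiservice registration, and so on) that have not completed; the wait ends when the endpoint returns 200 "ok". A small Go sketch of such a polling loop (hypothetical; it skips TLS verification, as an unauthenticated probe of a freshly restarted apiserver typically must):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, tolerating connection-refused, 403 and 500 on the way.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.184:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}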
	I0906 20:05:32.474627   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:32.474635   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:32.476583   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:05:28.157326   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.657628   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:32.477797   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:05:32.490715   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:05:32.510816   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:05:32.529192   72322 system_pods.go:59] 8 kube-system pods found
	I0906 20:05:32.529236   72322 system_pods.go:61] "coredns-6f6b679f8f-s7tnx" [ce438653-a3b9-4412-8705-7d2db7df5d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:05:32.529254   72322 system_pods.go:61] "etcd-no-preload-504385" [6ec6b2a1-c22a-44b4-b726-808a56f2be2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:05:32.529266   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [5f2baa0b-3cf3-4e0d-984b-80fa19adb3b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:05:32.529275   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [59ffbd51-6a83-43e6-8ef7-bc1cfd80b4d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:05:32.529292   72322 system_pods.go:61] "kube-proxy-dg8sg" [2e0393f3-b9bd-4603-b800-e1a2fdbf71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:05:32.529300   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [52a74c91-a6ec-4d64-8651-e1f87db21b40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:05:32.529306   72322 system_pods.go:61] "metrics-server-6867b74b74-nn295" [9d0f51d1-7abf-4f63-bef7-c02f6cd89c5d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:05:32.529313   72322 system_pods.go:61] "storage-provisioner" [69ed0066-2b84-4a4d-91e5-1e25bb3f31eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:05:32.529320   72322 system_pods.go:74] duration metric: took 18.48107ms to wait for pod list to return data ...
	I0906 20:05:32.529333   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:05:32.535331   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:05:32.535363   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:05:32.535376   72322 node_conditions.go:105] duration metric: took 6.037772ms to run NodePressure ...
	I0906 20:05:32.535397   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:32.955327   72322 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962739   72322 kubeadm.go:739] kubelet initialised
	I0906 20:05:32.962767   72322 kubeadm.go:740] duration metric: took 7.415054ms waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962776   72322 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:05:32.980280   72322 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:30.707091   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.207070   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.707224   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.207295   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.707195   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.207373   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.707519   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.207428   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.706808   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:35.207396   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.340006   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.838636   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:36.838703   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:33.155769   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.156761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.994689   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.487610   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.707415   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.206955   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.706868   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.206515   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.706659   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.206735   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.706915   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.207300   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.707211   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:40.207085   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.839362   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:41.338875   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.657190   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.158940   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:39.986557   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.486518   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.706720   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.206896   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.707281   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.206751   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.706754   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.206987   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.707245   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.207502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.707112   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:45.206569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.339353   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.838975   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.657187   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.156196   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:47.157014   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:43.986675   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.986701   72322 pod_ready.go:82] duration metric: took 11.006397745s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.986710   72322 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991650   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.991671   72322 pod_ready.go:82] duration metric: took 4.955425ms for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991680   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997218   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:44.997242   72322 pod_ready.go:82] duration metric: took 1.005553613s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997253   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002155   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.002177   72322 pod_ready.go:82] duration metric: took 4.916677ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002186   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006610   72322 pod_ready.go:93] pod "kube-proxy-dg8sg" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.006631   72322 pod_ready.go:82] duration metric: took 4.439092ms for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006639   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185114   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.185139   72322 pod_ready.go:82] duration metric: took 178.494249ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185149   72322 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:47.191676   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
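Note: each pod_ready.go wait above repeatedly fetches the pod and checks its Ready condition, for up to 4m0s per pod; the control-plane pods all turn Ready, while the metrics-server pods stay "False" for the remainder of this excerpt. A client-go sketch of the same check (hypothetical helper; the kubeconfig path and pod name are only examples taken from this log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or timeout elapses.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "etcd-no-preload-504385", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}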
	I0906 20:05:45.707450   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.207446   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.707006   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.206484   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.707168   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.207536   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.707554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.206894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.706709   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:50.206799   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.338355   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.839372   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.157301   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.157426   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.193619   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.692286   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.707012   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.206914   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.706917   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.207465   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.706682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.206565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.706757   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.206600   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.706926   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:55.207382   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.338845   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.339570   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:53.656904   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.158806   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:54.191331   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.192498   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.707103   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.206621   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.707156   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.207277   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.706568   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:58.206599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:05:58.206698   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:05:58.245828   73230 cri.go:89] found id: ""
	I0906 20:05:58.245857   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.245868   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:05:58.245875   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:05:58.245938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:05:58.283189   73230 cri.go:89] found id: ""
	I0906 20:05:58.283217   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.283228   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:05:58.283235   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:05:58.283303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:05:58.320834   73230 cri.go:89] found id: ""
	I0906 20:05:58.320868   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.320880   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:05:58.320889   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:05:58.320944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:05:58.356126   73230 cri.go:89] found id: ""
	I0906 20:05:58.356152   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.356162   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:05:58.356169   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:05:58.356227   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:05:58.395951   73230 cri.go:89] found id: ""
	I0906 20:05:58.395977   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.395987   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:05:58.395994   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:05:58.396061   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:05:58.431389   73230 cri.go:89] found id: ""
	I0906 20:05:58.431415   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.431426   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:05:58.431433   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:05:58.431511   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:05:58.466255   73230 cri.go:89] found id: ""
	I0906 20:05:58.466285   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.466294   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:05:58.466300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:05:58.466356   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:05:58.505963   73230 cri.go:89] found id: ""
	I0906 20:05:58.505989   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.505997   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:05:58.506006   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:05:58.506018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:05:58.579027   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:05:58.579061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:05:58.620332   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:05:58.620365   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:05:58.675017   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:05:58.675052   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:05:58.689944   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:05:58.689970   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:05:58.825396   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
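Note: for the old-k8s-version profile (pid 73230, v1.20.0) no kube-apiserver process ever appears, so each retry falls back to diagnostics: `crictl ps -a --quiet --name=<component>` to list matching container IDs for every control-plane component, then journalctl dumps for crio and kubelet, dmesg, container status, and a `kubectl describe nodes` that fails because nothing is listening on localhost:8443. A small Go sketch of the crictl listing step (hypothetical helper mirroring the query in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all CRI containers (any state) whose name
// matches the given component, as reported by crictl.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %s: %w", component, err)
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found %d kube-apiserver containers: %v\n", len(ids), ids)
}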
	I0906 20:05:57.838610   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.339329   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.656312   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.656996   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.691099   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.692040   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.192516   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:01.326375   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:01.340508   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:01.340570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:01.375429   73230 cri.go:89] found id: ""
	I0906 20:06:01.375460   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.375470   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:01.375478   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:01.375539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:01.410981   73230 cri.go:89] found id: ""
	I0906 20:06:01.411008   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.411019   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:01.411026   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:01.411083   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:01.448925   73230 cri.go:89] found id: ""
	I0906 20:06:01.448957   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.448968   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:01.448975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:01.449040   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:01.492063   73230 cri.go:89] found id: ""
	I0906 20:06:01.492094   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.492104   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:01.492112   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:01.492181   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:01.557779   73230 cri.go:89] found id: ""
	I0906 20:06:01.557812   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.557823   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:01.557830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:01.557892   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:01.604397   73230 cri.go:89] found id: ""
	I0906 20:06:01.604424   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.604432   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:01.604437   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:01.604482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:01.642249   73230 cri.go:89] found id: ""
	I0906 20:06:01.642280   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.642292   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:01.642300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:01.642364   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:01.692434   73230 cri.go:89] found id: ""
	I0906 20:06:01.692462   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.692474   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:01.692483   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:01.692498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:01.705860   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:01.705884   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:01.783929   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:01.783954   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:01.783965   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:01.864347   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:01.864385   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:01.902284   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:01.902311   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:04.456090   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:04.469775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:04.469840   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:04.505742   73230 cri.go:89] found id: ""
	I0906 20:06:04.505769   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.505778   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:04.505783   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:04.505835   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:04.541787   73230 cri.go:89] found id: ""
	I0906 20:06:04.541811   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.541819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:04.541824   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:04.541874   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:04.578775   73230 cri.go:89] found id: ""
	I0906 20:06:04.578806   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.578817   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:04.578825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:04.578885   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:04.614505   73230 cri.go:89] found id: ""
	I0906 20:06:04.614533   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.614542   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:04.614548   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:04.614594   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:04.652988   73230 cri.go:89] found id: ""
	I0906 20:06:04.653016   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.653027   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:04.653035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:04.653104   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:04.692380   73230 cri.go:89] found id: ""
	I0906 20:06:04.692408   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.692416   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:04.692423   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:04.692478   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:04.729846   73230 cri.go:89] found id: ""
	I0906 20:06:04.729869   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.729880   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:04.729887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:04.729953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:04.766341   73230 cri.go:89] found id: ""
	I0906 20:06:04.766370   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.766379   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:04.766390   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:04.766405   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:04.779801   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:04.779828   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:04.855313   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:04.855334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:04.855346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:04.934210   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:04.934246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:04.975589   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:04.975621   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
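
The cycle above shows minikube probing each control-plane component with "sudo crictl ps -a --quiet --name=<component>" and treating empty output as "No container was found matching ...". A minimal Go sketch of that probe follows; it is illustrative only (not minikube source), assumes crictl and sudo are available locally, and runs the command directly on the node rather than over SSH as the log does.

// Illustrative sketch only: mirror the probe seen in the log, where empty
// `crictl ps -a --quiet --name=<component>` output means "no container found".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"}
	for _, name := range components {
		// Same command the log records, run locally (assumption) instead of via SSH.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%q containers: %v\n", name, ids)
	}
}
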
	I0906 20:06:02.839427   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:04.840404   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.158048   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.655510   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.192558   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.692755   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.528622   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:07.544085   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:07.544156   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:07.588106   73230 cri.go:89] found id: ""
	I0906 20:06:07.588139   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.588149   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:07.588157   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:07.588210   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:07.630440   73230 cri.go:89] found id: ""
	I0906 20:06:07.630476   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.630494   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:07.630500   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:07.630551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:07.668826   73230 cri.go:89] found id: ""
	I0906 20:06:07.668870   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.668889   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:07.668898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:07.668962   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:07.706091   73230 cri.go:89] found id: ""
	I0906 20:06:07.706118   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.706130   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:07.706138   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:07.706196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:07.741679   73230 cri.go:89] found id: ""
	I0906 20:06:07.741708   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.741719   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:07.741726   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:07.741792   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:07.778240   73230 cri.go:89] found id: ""
	I0906 20:06:07.778277   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.778288   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:07.778296   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:07.778352   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:07.813183   73230 cri.go:89] found id: ""
	I0906 20:06:07.813212   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.813224   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:07.813232   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:07.813294   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:07.853938   73230 cri.go:89] found id: ""
	I0906 20:06:07.853970   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.853980   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:07.853988   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:07.854001   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:07.893540   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:07.893567   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:07.944219   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:07.944262   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:07.959601   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:07.959635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:08.034487   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:08.034513   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:08.034529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:07.339634   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:09.838953   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.658315   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.157980   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.192738   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.691823   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.611413   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:10.625273   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:10.625353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:10.664568   73230 cri.go:89] found id: ""
	I0906 20:06:10.664597   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.664609   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:10.664617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:10.664680   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:10.702743   73230 cri.go:89] found id: ""
	I0906 20:06:10.702772   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.702783   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:10.702790   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:10.702850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:10.739462   73230 cri.go:89] found id: ""
	I0906 20:06:10.739487   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.739504   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:10.739511   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:10.739572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:10.776316   73230 cri.go:89] found id: ""
	I0906 20:06:10.776344   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.776355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:10.776362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:10.776420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:10.809407   73230 cri.go:89] found id: ""
	I0906 20:06:10.809440   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.809451   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:10.809459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:10.809519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:10.844736   73230 cri.go:89] found id: ""
	I0906 20:06:10.844765   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.844777   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:10.844784   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:10.844851   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:10.880658   73230 cri.go:89] found id: ""
	I0906 20:06:10.880685   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.880693   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:10.880698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:10.880753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:10.917032   73230 cri.go:89] found id: ""
	I0906 20:06:10.917063   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.917074   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:10.917085   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:10.917100   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:10.980241   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:10.980272   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:10.995389   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:10.995435   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:11.070285   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:11.070313   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:11.070328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:11.155574   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:11.155607   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:13.703712   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:13.718035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:13.718093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:13.753578   73230 cri.go:89] found id: ""
	I0906 20:06:13.753603   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.753611   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:13.753617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:13.753659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:13.790652   73230 cri.go:89] found id: ""
	I0906 20:06:13.790681   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.790691   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:13.790697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:13.790749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:13.824243   73230 cri.go:89] found id: ""
	I0906 20:06:13.824278   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.824288   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:13.824293   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:13.824342   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:13.859647   73230 cri.go:89] found id: ""
	I0906 20:06:13.859691   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.859702   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:13.859721   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:13.859781   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:13.897026   73230 cri.go:89] found id: ""
	I0906 20:06:13.897061   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.897068   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:13.897075   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:13.897131   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:13.933904   73230 cri.go:89] found id: ""
	I0906 20:06:13.933927   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.933935   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:13.933941   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:13.933986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:13.969168   73230 cri.go:89] found id: ""
	I0906 20:06:13.969198   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.969210   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:13.969218   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:13.969295   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:14.005808   73230 cri.go:89] found id: ""
	I0906 20:06:14.005838   73230 logs.go:276] 0 containers: []
	W0906 20:06:14.005849   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:14.005862   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:14.005878   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:14.060878   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:14.060915   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:14.075388   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:14.075414   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:14.144942   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:14.144966   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:14.144981   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:14.233088   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:14.233139   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:12.338579   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.839062   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.655992   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.657020   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.157119   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.692103   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.193196   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:16.776744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:16.790292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:16.790384   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:16.828877   73230 cri.go:89] found id: ""
	I0906 20:06:16.828910   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.828921   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:16.828929   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:16.829016   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:16.864413   73230 cri.go:89] found id: ""
	I0906 20:06:16.864440   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.864449   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:16.864455   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:16.864525   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:16.908642   73230 cri.go:89] found id: ""
	I0906 20:06:16.908676   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.908687   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:16.908694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:16.908748   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:16.952247   73230 cri.go:89] found id: ""
	I0906 20:06:16.952278   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.952286   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:16.952292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:16.952343   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:16.990986   73230 cri.go:89] found id: ""
	I0906 20:06:16.991013   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.991022   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:16.991028   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:16.991077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:17.031002   73230 cri.go:89] found id: ""
	I0906 20:06:17.031034   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.031045   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:17.031052   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:17.031114   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:17.077533   73230 cri.go:89] found id: ""
	I0906 20:06:17.077560   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.077572   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:17.077579   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:17.077646   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:17.116770   73230 cri.go:89] found id: ""
	I0906 20:06:17.116798   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.116806   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:17.116817   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:17.116834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.169300   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:17.169337   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:17.184266   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:17.184299   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:17.266371   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:17.266400   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:17.266419   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:17.343669   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:17.343698   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:19.886541   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:19.899891   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:19.899951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:19.946592   73230 cri.go:89] found id: ""
	I0906 20:06:19.946621   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.946630   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:19.946636   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:19.946686   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:19.981758   73230 cri.go:89] found id: ""
	I0906 20:06:19.981788   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.981797   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:19.981802   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:19.981854   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:20.018372   73230 cri.go:89] found id: ""
	I0906 20:06:20.018397   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.018405   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:20.018411   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:20.018460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:20.054380   73230 cri.go:89] found id: ""
	I0906 20:06:20.054428   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.054440   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:20.054449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:20.054521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:20.092343   73230 cri.go:89] found id: ""
	I0906 20:06:20.092376   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.092387   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:20.092395   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:20.092463   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:20.128568   73230 cri.go:89] found id: ""
	I0906 20:06:20.128594   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.128604   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:20.128610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:20.128657   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:20.166018   73230 cri.go:89] found id: ""
	I0906 20:06:20.166046   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.166057   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:20.166072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:20.166125   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:20.203319   73230 cri.go:89] found id: ""
	I0906 20:06:20.203347   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.203355   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:20.203365   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:20.203381   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:20.287217   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:20.287243   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:20.287259   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:20.372799   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:20.372834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:20.416595   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:20.416620   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.338546   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.342409   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.838689   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.657411   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:22.157972   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.691327   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.692066   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:20.468340   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:20.468378   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:22.983259   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:22.997014   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:22.997098   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:23.034483   73230 cri.go:89] found id: ""
	I0906 20:06:23.034513   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.034524   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:23.034531   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:23.034597   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:23.072829   73230 cri.go:89] found id: ""
	I0906 20:06:23.072867   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.072878   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:23.072885   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:23.072949   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:23.110574   73230 cri.go:89] found id: ""
	I0906 20:06:23.110602   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.110613   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:23.110620   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:23.110684   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:23.149506   73230 cri.go:89] found id: ""
	I0906 20:06:23.149538   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.149550   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:23.149557   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:23.149619   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:23.191321   73230 cri.go:89] found id: ""
	I0906 20:06:23.191355   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.191367   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:23.191374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:23.191441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:23.233737   73230 cri.go:89] found id: ""
	I0906 20:06:23.233770   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.233791   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:23.233800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:23.233873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:23.270013   73230 cri.go:89] found id: ""
	I0906 20:06:23.270048   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.270060   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:23.270068   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:23.270127   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:23.309517   73230 cri.go:89] found id: ""
	I0906 20:06:23.309541   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.309549   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:23.309566   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:23.309578   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:23.380645   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:23.380675   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:23.380690   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:23.463656   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:23.463696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:23.504100   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:23.504134   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:23.557438   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:23.557483   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
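
Each "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty kube-apiserver listings: nothing is serving on the apiserver port, so the failure is not a kubeconfig problem. A minimal sketch of confirming that directly before invoking kubectl; the address, timeout, and the fact that it runs on the node itself are assumptions for illustration.

// Illustrative aside, not part of the captured log: a quick TCP probe of the
// apiserver port distinguishes "apiserver not running" from other kubectl errors.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		// Matches the repeated "connection refused" lines in the log above.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}
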
	I0906 20:06:23.841101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.340722   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.658261   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:27.155171   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.193829   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.690602   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.074045   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:26.088006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:26.088072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:26.124445   73230 cri.go:89] found id: ""
	I0906 20:06:26.124469   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.124476   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:26.124482   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:26.124537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:26.158931   73230 cri.go:89] found id: ""
	I0906 20:06:26.158957   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.158968   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:26.158975   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:26.159035   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:26.197125   73230 cri.go:89] found id: ""
	I0906 20:06:26.197154   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.197164   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:26.197171   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:26.197234   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:26.233241   73230 cri.go:89] found id: ""
	I0906 20:06:26.233278   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.233291   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:26.233300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:26.233366   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:26.269910   73230 cri.go:89] found id: ""
	I0906 20:06:26.269943   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.269955   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:26.269962   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:26.270026   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:26.308406   73230 cri.go:89] found id: ""
	I0906 20:06:26.308439   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.308450   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:26.308459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:26.308521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:26.344248   73230 cri.go:89] found id: ""
	I0906 20:06:26.344276   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.344288   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:26.344295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:26.344353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:26.391794   73230 cri.go:89] found id: ""
	I0906 20:06:26.391827   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.391840   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:26.391851   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:26.391866   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:26.444192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:26.444231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:26.459113   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:26.459144   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:26.533920   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:26.533945   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:26.533960   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:26.616382   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:26.616416   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:29.160429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:29.175007   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:29.175063   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:29.212929   73230 cri.go:89] found id: ""
	I0906 20:06:29.212961   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.212972   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:29.212980   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:29.213042   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:29.250777   73230 cri.go:89] found id: ""
	I0906 20:06:29.250806   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.250815   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:29.250821   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:29.250870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:29.292222   73230 cri.go:89] found id: ""
	I0906 20:06:29.292253   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.292262   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:29.292268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:29.292331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:29.328379   73230 cri.go:89] found id: ""
	I0906 20:06:29.328413   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.328431   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:29.328436   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:29.328482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:29.366792   73230 cri.go:89] found id: ""
	I0906 20:06:29.366822   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.366834   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:29.366841   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:29.366903   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:29.402233   73230 cri.go:89] found id: ""
	I0906 20:06:29.402261   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.402270   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:29.402276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:29.402331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:29.436695   73230 cri.go:89] found id: ""
	I0906 20:06:29.436724   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.436731   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:29.436736   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:29.436787   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:29.473050   73230 cri.go:89] found id: ""
	I0906 20:06:29.473074   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.473082   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:29.473091   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:29.473101   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:29.524981   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:29.525018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:29.538698   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:29.538722   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:29.611026   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:29.611049   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:29.611064   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:29.686898   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:29.686931   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:28.839118   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:30.839532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:29.156985   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.656552   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:28.694188   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.191032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.192623   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:32.228399   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:32.244709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:32.244775   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:32.285681   73230 cri.go:89] found id: ""
	I0906 20:06:32.285713   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.285724   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:32.285732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:32.285794   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:32.325312   73230 cri.go:89] found id: ""
	I0906 20:06:32.325340   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.325349   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:32.325355   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:32.325400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:32.361420   73230 cri.go:89] found id: ""
	I0906 20:06:32.361455   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.361468   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:32.361477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:32.361543   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:32.398881   73230 cri.go:89] found id: ""
	I0906 20:06:32.398956   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.398971   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:32.398984   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:32.399041   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:32.435336   73230 cri.go:89] found id: ""
	I0906 20:06:32.435362   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.435370   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:32.435375   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:32.435427   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:32.472849   73230 cri.go:89] found id: ""
	I0906 20:06:32.472900   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.472909   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:32.472914   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:32.472964   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:32.508176   73230 cri.go:89] found id: ""
	I0906 20:06:32.508199   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.508208   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:32.508213   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:32.508271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:32.550519   73230 cri.go:89] found id: ""
	I0906 20:06:32.550550   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.550561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:32.550576   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:32.550593   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:32.601362   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:32.601394   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:32.614821   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:32.614849   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:32.686044   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:32.686061   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:32.686074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:32.767706   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:32.767744   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:35.309159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:35.322386   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:35.322462   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:35.362909   73230 cri.go:89] found id: ""
	I0906 20:06:35.362937   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.362948   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:35.362955   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:35.363017   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:35.400591   73230 cri.go:89] found id: ""
	I0906 20:06:35.400621   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.400629   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:35.400635   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:35.400682   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:35.436547   73230 cri.go:89] found id: ""
	I0906 20:06:35.436578   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.436589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:35.436596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:35.436666   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:33.338812   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.340154   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.656782   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.657043   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.691312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:37.691358   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.473130   73230 cri.go:89] found id: ""
	I0906 20:06:35.473155   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.473163   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:35.473168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:35.473244   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:35.509646   73230 cri.go:89] found id: ""
	I0906 20:06:35.509677   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.509687   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:35.509695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:35.509754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:35.547651   73230 cri.go:89] found id: ""
	I0906 20:06:35.547684   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.547696   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:35.547703   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:35.547761   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:35.608590   73230 cri.go:89] found id: ""
	I0906 20:06:35.608614   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.608624   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:35.608631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:35.608691   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:35.651508   73230 cri.go:89] found id: ""
	I0906 20:06:35.651550   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.651561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:35.651572   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:35.651585   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:35.705502   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:35.705542   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:35.719550   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:35.719577   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:35.791435   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:35.791461   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:35.791476   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:35.869018   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:35.869070   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:38.411587   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:38.425739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:38.425800   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:38.463534   73230 cri.go:89] found id: ""
	I0906 20:06:38.463560   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.463571   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:38.463578   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:38.463628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:38.499238   73230 cri.go:89] found id: ""
	I0906 20:06:38.499269   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.499280   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:38.499287   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:38.499340   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:38.536297   73230 cri.go:89] found id: ""
	I0906 20:06:38.536334   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.536345   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:38.536352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:38.536417   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:38.573672   73230 cri.go:89] found id: ""
	I0906 20:06:38.573701   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.573712   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:38.573720   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:38.573779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:38.610913   73230 cri.go:89] found id: ""
	I0906 20:06:38.610937   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.610945   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:38.610950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:38.610996   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:38.647335   73230 cri.go:89] found id: ""
	I0906 20:06:38.647359   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.647368   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:38.647374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:38.647418   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:38.684054   73230 cri.go:89] found id: ""
	I0906 20:06:38.684084   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.684097   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:38.684106   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:38.684174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:38.731134   73230 cri.go:89] found id: ""
	I0906 20:06:38.731161   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.731173   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:38.731183   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:38.731199   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:38.787757   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:38.787798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:38.802920   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:38.802955   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:38.889219   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:38.889246   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:38.889261   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:38.964999   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:38.965042   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:37.838886   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.338914   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:38.156615   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.656577   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:39.691609   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.692330   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.504406   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:41.518111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:41.518169   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:41.558701   73230 cri.go:89] found id: ""
	I0906 20:06:41.558727   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.558738   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:41.558746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:41.558807   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:41.595986   73230 cri.go:89] found id: ""
	I0906 20:06:41.596009   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.596017   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:41.596023   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:41.596070   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:41.631462   73230 cri.go:89] found id: ""
	I0906 20:06:41.631486   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.631494   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:41.631504   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:41.631559   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:41.669646   73230 cri.go:89] found id: ""
	I0906 20:06:41.669674   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.669686   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:41.669693   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:41.669754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:41.708359   73230 cri.go:89] found id: ""
	I0906 20:06:41.708383   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.708391   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:41.708398   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:41.708446   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:41.745712   73230 cri.go:89] found id: ""
	I0906 20:06:41.745737   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.745750   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:41.745756   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:41.745804   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:41.781862   73230 cri.go:89] found id: ""
	I0906 20:06:41.781883   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.781892   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:41.781898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:41.781946   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:41.816687   73230 cri.go:89] found id: ""
	I0906 20:06:41.816714   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.816722   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:41.816730   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:41.816742   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:41.830115   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:41.830145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:41.908303   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:41.908334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:41.908348   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:42.001459   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:42.001501   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:42.061341   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:42.061368   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:44.619574   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:44.633355   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:44.633423   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:44.668802   73230 cri.go:89] found id: ""
	I0906 20:06:44.668834   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.668845   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:44.668852   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:44.668924   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:44.707613   73230 cri.go:89] found id: ""
	I0906 20:06:44.707639   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.707650   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:44.707657   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:44.707727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:44.744202   73230 cri.go:89] found id: ""
	I0906 20:06:44.744231   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.744243   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:44.744250   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:44.744311   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:44.783850   73230 cri.go:89] found id: ""
	I0906 20:06:44.783873   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.783881   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:44.783886   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:44.783938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:44.824986   73230 cri.go:89] found id: ""
	I0906 20:06:44.825011   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.825019   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:44.825025   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:44.825073   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:44.865157   73230 cri.go:89] found id: ""
	I0906 20:06:44.865182   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.865190   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:44.865196   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:44.865258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:44.908268   73230 cri.go:89] found id: ""
	I0906 20:06:44.908295   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.908305   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:44.908312   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:44.908359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:44.948669   73230 cri.go:89] found id: ""
	I0906 20:06:44.948697   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.948706   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:44.948716   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:44.948731   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:44.961862   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:44.961887   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:45.036756   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:45.036783   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:45.036801   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:45.116679   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:45.116717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:45.159756   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:45.159784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:42.339271   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.839443   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:43.155878   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:45.158884   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.192211   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:46.692140   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.714682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:47.730754   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:47.730820   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:47.783208   73230 cri.go:89] found id: ""
	I0906 20:06:47.783239   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.783249   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:47.783255   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:47.783312   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:47.844291   73230 cri.go:89] found id: ""
	I0906 20:06:47.844324   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.844336   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:47.844344   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:47.844407   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:47.881877   73230 cri.go:89] found id: ""
	I0906 20:06:47.881905   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.881913   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:47.881919   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:47.881986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:47.918034   73230 cri.go:89] found id: ""
	I0906 20:06:47.918058   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.918066   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:47.918072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:47.918126   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:47.957045   73230 cri.go:89] found id: ""
	I0906 20:06:47.957068   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.957077   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:47.957083   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:47.957134   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:47.993849   73230 cri.go:89] found id: ""
	I0906 20:06:47.993872   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.993883   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:47.993890   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:47.993951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:48.031214   73230 cri.go:89] found id: ""
	I0906 20:06:48.031239   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.031249   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:48.031257   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:48.031314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:48.064634   73230 cri.go:89] found id: ""
	I0906 20:06:48.064673   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.064690   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:48.064698   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:48.064710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:48.104307   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:48.104343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:48.158869   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:48.158900   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:48.173000   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:48.173026   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:48.248751   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:48.248774   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:48.248792   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:47.339014   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.339656   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.838817   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.656402   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.156349   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:52.156651   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.192411   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.691635   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.833490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:50.847618   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:50.847702   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:50.887141   73230 cri.go:89] found id: ""
	I0906 20:06:50.887167   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.887176   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:50.887181   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:50.887228   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:50.923435   73230 cri.go:89] found id: ""
	I0906 20:06:50.923480   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.923491   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:50.923499   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:50.923567   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:50.959704   73230 cri.go:89] found id: ""
	I0906 20:06:50.959730   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.959742   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:50.959748   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:50.959810   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:50.992994   73230 cri.go:89] found id: ""
	I0906 20:06:50.993023   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.993032   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:50.993037   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:50.993091   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:51.031297   73230 cri.go:89] found id: ""
	I0906 20:06:51.031321   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.031329   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:51.031335   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:51.031390   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:51.067698   73230 cri.go:89] found id: ""
	I0906 20:06:51.067721   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.067732   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:51.067739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:51.067799   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:51.102240   73230 cri.go:89] found id: ""
	I0906 20:06:51.102268   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.102278   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:51.102285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:51.102346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:51.137146   73230 cri.go:89] found id: ""
	I0906 20:06:51.137172   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.137183   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:51.137194   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:51.137209   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:51.216158   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:51.216194   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:51.256063   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:51.256088   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:51.309176   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:51.309210   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:51.323515   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:51.323544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:51.393281   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:53.893714   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:53.907807   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:53.907863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:53.947929   73230 cri.go:89] found id: ""
	I0906 20:06:53.947954   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.947962   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:53.947968   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:53.948014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:53.983005   73230 cri.go:89] found id: ""
	I0906 20:06:53.983028   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.983041   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:53.983046   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:53.983094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:54.019004   73230 cri.go:89] found id: ""
	I0906 20:06:54.019027   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.019035   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:54.019041   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:54.019094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:54.060240   73230 cri.go:89] found id: ""
	I0906 20:06:54.060266   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.060279   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:54.060285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:54.060336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:54.096432   73230 cri.go:89] found id: ""
	I0906 20:06:54.096461   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.096469   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:54.096475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:54.096537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:54.132992   73230 cri.go:89] found id: ""
	I0906 20:06:54.133021   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.133033   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:54.133040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:54.133103   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:54.172730   73230 cri.go:89] found id: ""
	I0906 20:06:54.172754   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.172766   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:54.172778   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:54.172839   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:54.212050   73230 cri.go:89] found id: ""
	I0906 20:06:54.212191   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.212202   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:54.212212   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:54.212234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:54.263603   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:54.263647   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:54.281291   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:54.281324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:54.359523   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:54.359545   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:54.359568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:54.442230   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:54.442265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:54.339159   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.841459   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.157379   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.656134   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.191878   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.691766   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.983744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:56.997451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:56.997527   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:57.034792   73230 cri.go:89] found id: ""
	I0906 20:06:57.034817   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.034825   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:57.034831   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:57.034883   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:57.073709   73230 cri.go:89] found id: ""
	I0906 20:06:57.073735   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.073745   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:57.073751   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:57.073803   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:57.122758   73230 cri.go:89] found id: ""
	I0906 20:06:57.122787   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.122798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:57.122808   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:57.122865   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:57.158208   73230 cri.go:89] found id: ""
	I0906 20:06:57.158242   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.158252   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:57.158262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:57.158323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:57.194004   73230 cri.go:89] found id: ""
	I0906 20:06:57.194029   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.194037   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:57.194044   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:57.194099   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:57.230068   73230 cri.go:89] found id: ""
	I0906 20:06:57.230099   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.230111   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:57.230119   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:57.230186   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:57.265679   73230 cri.go:89] found id: ""
	I0906 20:06:57.265707   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.265718   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:57.265735   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:57.265801   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:57.304917   73230 cri.go:89] found id: ""
	I0906 20:06:57.304946   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.304956   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:57.304967   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:57.304980   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:57.357238   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:57.357276   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:57.371648   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:57.371674   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:57.438572   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:57.438590   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:57.438602   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:57.528212   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:57.528256   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:00.071140   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:00.084975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:00.085055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:00.119680   73230 cri.go:89] found id: ""
	I0906 20:07:00.119713   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.119725   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:00.119732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:00.119786   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:00.155678   73230 cri.go:89] found id: ""
	I0906 20:07:00.155704   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.155716   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:00.155723   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:00.155769   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:00.190758   73230 cri.go:89] found id: ""
	I0906 20:07:00.190783   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.190793   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:00.190799   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:00.190863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:00.228968   73230 cri.go:89] found id: ""
	I0906 20:07:00.228999   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.229010   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:00.229018   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:00.229079   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:00.265691   73230 cri.go:89] found id: ""
	I0906 20:07:00.265722   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.265733   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:00.265741   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:00.265806   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:00.305785   73230 cri.go:89] found id: ""
	I0906 20:07:00.305812   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.305820   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:00.305825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:00.305872   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:00.341872   73230 cri.go:89] found id: ""
	I0906 20:07:00.341895   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.341902   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:00.341907   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:00.341955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:00.377661   73230 cri.go:89] found id: ""
	I0906 20:07:00.377690   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.377702   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:00.377712   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:00.377725   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:00.428215   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:00.428254   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:00.443135   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:00.443165   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 20:06:59.337996   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.338924   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:58.657236   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.156973   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:59.191556   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.192082   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.193511   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	W0906 20:07:00.518745   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:00.518768   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:00.518781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:00.604413   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:00.604448   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.146657   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:03.160610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:03.160665   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:03.200916   73230 cri.go:89] found id: ""
	I0906 20:07:03.200950   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.200960   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:03.200967   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:03.201029   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:03.239550   73230 cri.go:89] found id: ""
	I0906 20:07:03.239579   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.239592   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:03.239600   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:03.239660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:03.278216   73230 cri.go:89] found id: ""
	I0906 20:07:03.278244   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.278255   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:03.278263   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:03.278325   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:03.315028   73230 cri.go:89] found id: ""
	I0906 20:07:03.315059   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.315073   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:03.315080   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:03.315146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:03.354614   73230 cri.go:89] found id: ""
	I0906 20:07:03.354638   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.354647   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:03.354652   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:03.354710   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:03.390105   73230 cri.go:89] found id: ""
	I0906 20:07:03.390129   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.390138   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:03.390144   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:03.390190   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:03.427651   73230 cri.go:89] found id: ""
	I0906 20:07:03.427679   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.427687   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:03.427695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:03.427763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:03.463191   73230 cri.go:89] found id: ""
	I0906 20:07:03.463220   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.463230   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:03.463242   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:03.463288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:03.476966   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:03.476995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:03.558415   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:03.558441   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:03.558457   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:03.641528   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:03.641564   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.680916   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:03.680943   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:03.339511   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.340113   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.157907   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.160507   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.692151   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:08.191782   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:06.235947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:06.249589   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:06.249667   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:06.289193   73230 cri.go:89] found id: ""
	I0906 20:07:06.289223   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.289235   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:06.289242   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:06.289305   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:06.324847   73230 cri.go:89] found id: ""
	I0906 20:07:06.324887   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.324898   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:06.324904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:06.324966   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:06.361755   73230 cri.go:89] found id: ""
	I0906 20:07:06.361786   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.361798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:06.361806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:06.361873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:06.397739   73230 cri.go:89] found id: ""
	I0906 20:07:06.397766   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.397775   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:06.397780   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:06.397833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:06.432614   73230 cri.go:89] found id: ""
	I0906 20:07:06.432641   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.432649   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:06.432655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:06.432703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:06.467784   73230 cri.go:89] found id: ""
	I0906 20:07:06.467812   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.467823   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:06.467830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:06.467890   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:06.507055   73230 cri.go:89] found id: ""
	I0906 20:07:06.507085   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.507096   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:06.507104   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:06.507165   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:06.544688   73230 cri.go:89] found id: ""
	I0906 20:07:06.544720   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.544730   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:06.544740   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:06.544751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:06.597281   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:06.597314   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:06.612749   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:06.612774   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:06.684973   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:06.684993   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:06.685006   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:06.764306   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:06.764345   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.304340   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:09.317460   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:09.317536   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:09.354289   73230 cri.go:89] found id: ""
	I0906 20:07:09.354312   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.354322   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:09.354327   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:09.354373   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:09.390962   73230 cri.go:89] found id: ""
	I0906 20:07:09.390997   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.391008   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:09.391015   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:09.391076   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:09.427456   73230 cri.go:89] found id: ""
	I0906 20:07:09.427491   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.427502   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:09.427510   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:09.427572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:09.462635   73230 cri.go:89] found id: ""
	I0906 20:07:09.462667   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.462680   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:09.462687   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:09.462749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:09.506726   73230 cri.go:89] found id: ""
	I0906 20:07:09.506751   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.506767   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:09.506775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:09.506836   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:09.541974   73230 cri.go:89] found id: ""
	I0906 20:07:09.541999   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.542009   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:09.542017   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:09.542077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:09.580069   73230 cri.go:89] found id: ""
	I0906 20:07:09.580104   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.580115   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:09.580123   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:09.580182   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:09.616025   73230 cri.go:89] found id: ""
	I0906 20:07:09.616054   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.616065   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:09.616075   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:09.616090   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:09.630967   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:09.630993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:09.716733   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:09.716766   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:09.716782   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:09.792471   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:09.792503   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.832326   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:09.832357   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:07.840909   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.339239   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:07.655710   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:09.656069   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:11.656458   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.192155   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.192716   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.385565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:12.398694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:12.398768   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:12.437446   73230 cri.go:89] found id: ""
	I0906 20:07:12.437473   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.437482   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:12.437487   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:12.437555   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:12.473328   73230 cri.go:89] found id: ""
	I0906 20:07:12.473355   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.473362   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:12.473372   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:12.473429   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:12.510935   73230 cri.go:89] found id: ""
	I0906 20:07:12.510962   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.510972   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:12.510979   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:12.511044   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:12.547961   73230 cri.go:89] found id: ""
	I0906 20:07:12.547991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.547999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:12.548005   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:12.548062   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:12.585257   73230 cri.go:89] found id: ""
	I0906 20:07:12.585291   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.585302   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:12.585309   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:12.585369   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:12.623959   73230 cri.go:89] found id: ""
	I0906 20:07:12.623991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.624003   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:12.624010   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:12.624066   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:12.662795   73230 cri.go:89] found id: ""
	I0906 20:07:12.662822   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.662832   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:12.662840   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:12.662896   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:12.700941   73230 cri.go:89] found id: ""
	I0906 20:07:12.700967   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.700974   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:12.700983   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:12.700994   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:12.785989   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:12.786025   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:12.826678   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:12.826704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:12.881558   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:12.881599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:12.896035   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:12.896065   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:12.970721   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:12.839031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.339615   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:13.656809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.657470   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:14.691032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:16.692697   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.471171   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:15.484466   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:15.484541   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:15.518848   73230 cri.go:89] found id: ""
	I0906 20:07:15.518875   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.518886   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:15.518894   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:15.518953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:15.553444   73230 cri.go:89] found id: ""
	I0906 20:07:15.553468   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.553476   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:15.553482   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:15.553528   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:15.589136   73230 cri.go:89] found id: ""
	I0906 20:07:15.589160   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.589168   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:15.589173   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:15.589220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:15.624410   73230 cri.go:89] found id: ""
	I0906 20:07:15.624434   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.624443   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:15.624449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:15.624492   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:15.661506   73230 cri.go:89] found id: ""
	I0906 20:07:15.661535   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.661547   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:15.661555   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:15.661615   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:15.699126   73230 cri.go:89] found id: ""
	I0906 20:07:15.699148   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.699155   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:15.699161   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:15.699207   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:15.736489   73230 cri.go:89] found id: ""
	I0906 20:07:15.736523   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.736534   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:15.736542   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:15.736604   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:15.771988   73230 cri.go:89] found id: ""
	I0906 20:07:15.772013   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.772020   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:15.772029   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:15.772045   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:15.822734   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:15.822765   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:15.836820   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:15.836872   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:15.915073   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:15.915111   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:15.915126   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:15.988476   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:15.988514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:18.528710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:18.541450   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:18.541526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:18.581278   73230 cri.go:89] found id: ""
	I0906 20:07:18.581308   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.581317   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:18.581323   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:18.581381   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:18.616819   73230 cri.go:89] found id: ""
	I0906 20:07:18.616843   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.616850   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:18.616871   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:18.616923   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:18.655802   73230 cri.go:89] found id: ""
	I0906 20:07:18.655827   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.655842   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:18.655849   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:18.655908   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:18.693655   73230 cri.go:89] found id: ""
	I0906 20:07:18.693679   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.693689   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:18.693696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:18.693779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:18.730882   73230 cri.go:89] found id: ""
	I0906 20:07:18.730914   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.730924   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:18.730931   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:18.730994   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:18.767219   73230 cri.go:89] found id: ""
	I0906 20:07:18.767243   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.767250   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:18.767256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:18.767316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:18.802207   73230 cri.go:89] found id: ""
	I0906 20:07:18.802230   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.802238   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:18.802243   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:18.802300   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:18.840449   73230 cri.go:89] found id: ""
	I0906 20:07:18.840471   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.840481   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:18.840491   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:18.840504   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:18.892430   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:18.892469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:18.906527   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:18.906561   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:18.980462   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:18.980483   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:18.980494   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:19.059550   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:19.059588   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:17.340292   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:19.840090   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.156486   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:20.657764   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.693021   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.191529   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.191865   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.599879   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:21.614131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:21.614205   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:21.650887   73230 cri.go:89] found id: ""
	I0906 20:07:21.650910   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.650919   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:21.650924   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:21.650978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:21.684781   73230 cri.go:89] found id: ""
	I0906 20:07:21.684809   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.684819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:21.684827   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:21.684907   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:21.722685   73230 cri.go:89] found id: ""
	I0906 20:07:21.722711   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.722722   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:21.722729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:21.722791   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:21.757581   73230 cri.go:89] found id: ""
	I0906 20:07:21.757607   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.757616   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:21.757622   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:21.757670   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:21.791984   73230 cri.go:89] found id: ""
	I0906 20:07:21.792008   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.792016   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:21.792022   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:21.792072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:21.853612   73230 cri.go:89] found id: ""
	I0906 20:07:21.853636   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.853644   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:21.853650   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:21.853699   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:21.894184   73230 cri.go:89] found id: ""
	I0906 20:07:21.894232   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.894247   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:21.894256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:21.894318   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:21.930731   73230 cri.go:89] found id: ""
	I0906 20:07:21.930758   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.930768   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:21.930779   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:21.930798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:21.969174   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:21.969207   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:22.017647   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:22.017680   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:22.033810   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:22.033852   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:22.111503   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:22.111530   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:22.111544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:24.696348   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:24.710428   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:24.710506   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:24.747923   73230 cri.go:89] found id: ""
	I0906 20:07:24.747958   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.747969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:24.747977   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:24.748037   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:24.782216   73230 cri.go:89] found id: ""
	I0906 20:07:24.782250   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.782260   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:24.782268   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:24.782329   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:24.822093   73230 cri.go:89] found id: ""
	I0906 20:07:24.822126   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.822137   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:24.822148   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:24.822217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:24.857166   73230 cri.go:89] found id: ""
	I0906 20:07:24.857202   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.857213   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:24.857224   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:24.857314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:24.892575   73230 cri.go:89] found id: ""
	I0906 20:07:24.892610   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.892621   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:24.892629   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:24.892689   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:24.929102   73230 cri.go:89] found id: ""
	I0906 20:07:24.929130   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.929140   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:24.929149   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:24.929206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:24.964224   73230 cri.go:89] found id: ""
	I0906 20:07:24.964257   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.964268   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:24.964276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:24.964337   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:25.000453   73230 cri.go:89] found id: ""
	I0906 20:07:25.000475   73230 logs.go:276] 0 containers: []
	W0906 20:07:25.000485   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:25.000496   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:25.000511   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:25.041824   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:25.041851   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:25.093657   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:25.093692   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:25.107547   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:25.107576   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:25.178732   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:25.178755   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:25.178771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:22.338864   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:24.339432   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:26.838165   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.156449   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.156979   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.158086   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.192653   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.693480   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.764271   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:27.777315   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:27.777389   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:27.812621   73230 cri.go:89] found id: ""
	I0906 20:07:27.812644   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.812655   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:27.812663   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:27.812718   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:27.853063   73230 cri.go:89] found id: ""
	I0906 20:07:27.853093   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.853104   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:27.853112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:27.853171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:27.894090   73230 cri.go:89] found id: ""
	I0906 20:07:27.894118   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.894130   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:27.894137   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:27.894196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:27.930764   73230 cri.go:89] found id: ""
	I0906 20:07:27.930791   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.930802   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:27.930809   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:27.930870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:27.967011   73230 cri.go:89] found id: ""
	I0906 20:07:27.967036   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.967047   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:27.967053   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:27.967111   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:28.002119   73230 cri.go:89] found id: ""
	I0906 20:07:28.002146   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.002157   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:28.002164   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:28.002226   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:28.043884   73230 cri.go:89] found id: ""
	I0906 20:07:28.043909   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.043917   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:28.043923   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:28.043979   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:28.081510   73230 cri.go:89] found id: ""
	I0906 20:07:28.081538   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.081547   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:28.081557   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:28.081568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:28.159077   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:28.159109   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:28.207489   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:28.207527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:28.267579   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:28.267613   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:28.287496   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:28.287529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:28.376555   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:28.838301   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.843091   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:29.655598   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:31.657757   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.192112   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:32.692354   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.876683   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:30.890344   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:30.890424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:30.930618   73230 cri.go:89] found id: ""
	I0906 20:07:30.930647   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.930658   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:30.930666   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:30.930727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:30.968801   73230 cri.go:89] found id: ""
	I0906 20:07:30.968825   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.968834   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:30.968839   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:30.968911   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:31.006437   73230 cri.go:89] found id: ""
	I0906 20:07:31.006463   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.006472   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:31.006477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:31.006531   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:31.042091   73230 cri.go:89] found id: ""
	I0906 20:07:31.042117   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.042125   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:31.042131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:31.042177   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:31.079244   73230 cri.go:89] found id: ""
	I0906 20:07:31.079271   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.079280   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:31.079286   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:31.079336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:31.116150   73230 cri.go:89] found id: ""
	I0906 20:07:31.116174   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.116182   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:31.116188   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:31.116240   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:31.151853   73230 cri.go:89] found id: ""
	I0906 20:07:31.151877   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.151886   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:31.151892   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:31.151939   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:31.189151   73230 cri.go:89] found id: ""
	I0906 20:07:31.189181   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.189192   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:31.189203   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:31.189218   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:31.234466   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:31.234493   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:31.286254   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:31.286288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:31.300500   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:31.300525   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:31.372968   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:31.372987   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:31.372997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:33.949865   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:33.964791   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:33.964849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:34.027049   73230 cri.go:89] found id: ""
	I0906 20:07:34.027082   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.027094   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:34.027102   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:34.027162   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:34.080188   73230 cri.go:89] found id: ""
	I0906 20:07:34.080218   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.080230   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:34.080237   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:34.080320   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:34.124146   73230 cri.go:89] found id: ""
	I0906 20:07:34.124171   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.124179   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:34.124185   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:34.124230   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:34.161842   73230 cri.go:89] found id: ""
	I0906 20:07:34.161864   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.161872   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:34.161878   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:34.161938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:34.201923   73230 cri.go:89] found id: ""
	I0906 20:07:34.201951   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.201961   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:34.201967   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:34.202032   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:34.246609   73230 cri.go:89] found id: ""
	I0906 20:07:34.246644   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.246656   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:34.246665   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:34.246739   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:34.287616   73230 cri.go:89] found id: ""
	I0906 20:07:34.287646   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.287657   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:34.287663   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:34.287721   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:34.322270   73230 cri.go:89] found id: ""
	I0906 20:07:34.322297   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.322309   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:34.322320   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:34.322334   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:34.378598   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:34.378633   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:34.392748   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:34.392781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:34.468620   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:34.468648   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:34.468663   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:34.548290   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:34.548324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:33.339665   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.339890   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:34.157895   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:36.656829   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.192386   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.192574   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.095962   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:37.110374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:37.110459   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:37.146705   73230 cri.go:89] found id: ""
	I0906 20:07:37.146732   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.146740   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:37.146746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:37.146802   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:37.185421   73230 cri.go:89] found id: ""
	I0906 20:07:37.185449   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.185461   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:37.185468   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:37.185532   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:37.224767   73230 cri.go:89] found id: ""
	I0906 20:07:37.224793   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.224801   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:37.224806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:37.224884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:37.265392   73230 cri.go:89] found id: ""
	I0906 20:07:37.265422   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.265432   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:37.265438   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:37.265496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:37.302065   73230 cri.go:89] found id: ""
	I0906 20:07:37.302093   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.302101   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:37.302107   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:37.302171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:37.341466   73230 cri.go:89] found id: ""
	I0906 20:07:37.341493   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.341505   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:37.341513   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:37.341576   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.377701   73230 cri.go:89] found id: ""
	I0906 20:07:37.377724   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.377732   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:37.377738   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:37.377798   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:37.412927   73230 cri.go:89] found id: ""
	I0906 20:07:37.412955   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.412966   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:37.412977   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:37.412993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:37.427750   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:37.427776   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:37.500904   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:37.500928   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:37.500945   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:37.583204   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:37.583246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:37.623477   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:37.623512   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.179798   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:40.194295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:40.194372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:40.229731   73230 cri.go:89] found id: ""
	I0906 20:07:40.229768   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.229779   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:40.229787   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:40.229848   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:40.275909   73230 cri.go:89] found id: ""
	I0906 20:07:40.275943   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.275956   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:40.275964   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:40.276049   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:40.316552   73230 cri.go:89] found id: ""
	I0906 20:07:40.316585   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.316594   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:40.316599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:40.316647   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:40.355986   73230 cri.go:89] found id: ""
	I0906 20:07:40.356017   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.356028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:40.356036   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:40.356095   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:40.396486   73230 cri.go:89] found id: ""
	I0906 20:07:40.396522   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.396535   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:40.396544   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:40.396609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:40.440311   73230 cri.go:89] found id: ""
	I0906 20:07:40.440338   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.440346   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:40.440352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:40.440414   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.346532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.839521   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.156737   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.156967   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.691703   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.691972   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:40.476753   73230 cri.go:89] found id: ""
	I0906 20:07:40.476781   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.476790   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:40.476797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:40.476844   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:40.514462   73230 cri.go:89] found id: ""
	I0906 20:07:40.514489   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.514500   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:40.514511   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:40.514527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:40.553670   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:40.553700   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.608304   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:40.608343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:40.622486   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:40.622514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:40.699408   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:40.699434   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:40.699451   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.278892   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:43.292455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:43.292526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:43.328900   73230 cri.go:89] found id: ""
	I0906 20:07:43.328929   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.328940   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:43.328948   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:43.329009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:43.366728   73230 cri.go:89] found id: ""
	I0906 20:07:43.366754   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.366762   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:43.366768   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:43.366817   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:43.401566   73230 cri.go:89] found id: ""
	I0906 20:07:43.401590   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.401599   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:43.401604   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:43.401650   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:43.437022   73230 cri.go:89] found id: ""
	I0906 20:07:43.437051   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.437063   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:43.437072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:43.437140   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:43.473313   73230 cri.go:89] found id: ""
	I0906 20:07:43.473342   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.473354   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:43.473360   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:43.473420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:43.513590   73230 cri.go:89] found id: ""
	I0906 20:07:43.513616   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.513624   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:43.513630   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:43.513690   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:43.549974   73230 cri.go:89] found id: ""
	I0906 20:07:43.550011   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.550025   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:43.550032   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:43.550100   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:43.592386   73230 cri.go:89] found id: ""
	I0906 20:07:43.592426   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.592444   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:43.592454   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:43.592482   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:43.607804   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:43.607841   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:43.679533   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:43.679568   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:43.679580   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.762111   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:43.762145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:43.802883   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:43.802908   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:42.340252   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:44.838648   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:46.838831   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.157956   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.657410   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.693014   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.693640   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.191509   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:46.358429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:46.371252   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:46.371326   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:46.406397   73230 cri.go:89] found id: ""
	I0906 20:07:46.406420   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.406430   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:46.406437   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:46.406496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:46.452186   73230 cri.go:89] found id: ""
	I0906 20:07:46.452209   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.452218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:46.452223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:46.452288   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:46.489418   73230 cri.go:89] found id: ""
	I0906 20:07:46.489443   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.489454   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:46.489461   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:46.489523   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:46.529650   73230 cri.go:89] found id: ""
	I0906 20:07:46.529679   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.529690   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:46.529698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:46.529760   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:46.566429   73230 cri.go:89] found id: ""
	I0906 20:07:46.566454   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.566466   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:46.566474   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:46.566539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:46.604999   73230 cri.go:89] found id: ""
	I0906 20:07:46.605026   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.605034   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:46.605040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:46.605085   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:46.643116   73230 cri.go:89] found id: ""
	I0906 20:07:46.643144   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.643155   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:46.643162   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:46.643222   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:46.679734   73230 cri.go:89] found id: ""
	I0906 20:07:46.679756   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.679764   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:46.679772   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:46.679784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:46.736380   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:46.736430   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:46.750649   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:46.750681   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:46.833098   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:46.833130   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:46.833146   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:46.912223   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:46.912267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.453662   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:49.466520   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:49.466585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:49.508009   73230 cri.go:89] found id: ""
	I0906 20:07:49.508038   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.508049   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:49.508056   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:49.508119   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:49.545875   73230 cri.go:89] found id: ""
	I0906 20:07:49.545900   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.545911   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:49.545918   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:49.545978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:49.584899   73230 cri.go:89] found id: ""
	I0906 20:07:49.584926   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.584933   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:49.584940   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:49.585001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:49.621044   73230 cri.go:89] found id: ""
	I0906 20:07:49.621073   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.621085   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:49.621092   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:49.621146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:49.657074   73230 cri.go:89] found id: ""
	I0906 20:07:49.657099   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.657108   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:49.657115   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:49.657174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:49.693734   73230 cri.go:89] found id: ""
	I0906 20:07:49.693759   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.693767   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:49.693773   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:49.693827   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:49.729920   73230 cri.go:89] found id: ""
	I0906 20:07:49.729950   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.729960   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:49.729965   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:49.730014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:49.765282   73230 cri.go:89] found id: ""
	I0906 20:07:49.765313   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.765324   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:49.765335   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:49.765350   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:49.842509   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:49.842531   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:49.842543   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:49.920670   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:49.920704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.961193   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:49.961220   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:50.014331   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:50.014366   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:48.839877   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:51.339381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.156290   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.157337   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.692055   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:53.191487   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.529758   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:52.543533   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:52.543596   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:52.582802   73230 cri.go:89] found id: ""
	I0906 20:07:52.582826   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.582838   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:52.582845   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:52.582909   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:52.625254   73230 cri.go:89] found id: ""
	I0906 20:07:52.625287   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.625308   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:52.625317   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:52.625383   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:52.660598   73230 cri.go:89] found id: ""
	I0906 20:07:52.660621   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.660632   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:52.660640   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:52.660703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:52.702980   73230 cri.go:89] found id: ""
	I0906 20:07:52.703004   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.703014   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:52.703021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:52.703082   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:52.740361   73230 cri.go:89] found id: ""
	I0906 20:07:52.740387   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.740394   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:52.740400   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:52.740447   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:52.780011   73230 cri.go:89] found id: ""
	I0906 20:07:52.780043   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.780056   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:52.780063   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:52.780123   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:52.825546   73230 cri.go:89] found id: ""
	I0906 20:07:52.825583   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.825595   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:52.825602   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:52.825659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:52.864347   73230 cri.go:89] found id: ""
	I0906 20:07:52.864381   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.864393   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:52.864403   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:52.864417   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:52.943041   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:52.943077   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:52.986158   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:52.986185   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:53.039596   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:53.039635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:53.054265   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:53.054295   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:53.125160   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:53.339887   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.839233   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.657521   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.157101   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.192803   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.692328   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.626058   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:55.639631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:55.639705   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:55.677283   73230 cri.go:89] found id: ""
	I0906 20:07:55.677304   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.677312   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:55.677317   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:55.677372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:55.714371   73230 cri.go:89] found id: ""
	I0906 20:07:55.714402   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.714414   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:55.714422   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:55.714509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:55.753449   73230 cri.go:89] found id: ""
	I0906 20:07:55.753487   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.753500   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:55.753507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:55.753575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:55.792955   73230 cri.go:89] found id: ""
	I0906 20:07:55.792987   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.792999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:55.793006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:55.793074   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:55.827960   73230 cri.go:89] found id: ""
	I0906 20:07:55.827985   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.827996   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:55.828003   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:55.828052   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:55.867742   73230 cri.go:89] found id: ""
	I0906 20:07:55.867765   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.867778   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:55.867785   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:55.867849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:55.907328   73230 cri.go:89] found id: ""
	I0906 20:07:55.907352   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.907359   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:55.907365   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:55.907424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:55.946057   73230 cri.go:89] found id: ""
	I0906 20:07:55.946091   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.946099   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:55.946108   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:55.946119   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:56.033579   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:56.033598   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:56.033611   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:56.116337   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:56.116372   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:56.163397   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:56.163428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:56.217189   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:56.217225   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:58.736147   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:58.749729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:58.749833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:58.786375   73230 cri.go:89] found id: ""
	I0906 20:07:58.786399   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.786406   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:58.786412   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:58.786460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:58.825188   73230 cri.go:89] found id: ""
	I0906 20:07:58.825210   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.825218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:58.825223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:58.825271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:58.866734   73230 cri.go:89] found id: ""
	I0906 20:07:58.866756   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.866764   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:58.866769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:58.866823   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:58.909742   73230 cri.go:89] found id: ""
	I0906 20:07:58.909774   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.909785   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:58.909793   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:58.909850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:58.950410   73230 cri.go:89] found id: ""
	I0906 20:07:58.950438   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.950447   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:58.950452   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:58.950500   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:58.987431   73230 cri.go:89] found id: ""
	I0906 20:07:58.987454   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.987462   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:58.987468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:58.987518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:59.023432   73230 cri.go:89] found id: ""
	I0906 20:07:59.023462   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.023474   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:59.023482   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:59.023544   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:59.057695   73230 cri.go:89] found id: ""
	I0906 20:07:59.057724   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.057734   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:59.057743   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:59.057755   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:59.109634   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:59.109671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:59.125436   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:59.125479   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:59.202018   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:59.202040   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:59.202054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:59.281418   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:59.281456   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:58.339751   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.842794   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.658145   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.155679   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.157913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.192179   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.193068   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:01.823947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:01.839055   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:01.839115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:01.876178   73230 cri.go:89] found id: ""
	I0906 20:08:01.876206   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.876215   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:01.876220   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:01.876274   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:01.912000   73230 cri.go:89] found id: ""
	I0906 20:08:01.912028   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.912038   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:01.912045   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:01.912107   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:01.948382   73230 cri.go:89] found id: ""
	I0906 20:08:01.948412   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.948420   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:01.948426   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:01.948474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:01.982991   73230 cri.go:89] found id: ""
	I0906 20:08:01.983019   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.983028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:01.983033   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:01.983080   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:02.016050   73230 cri.go:89] found id: ""
	I0906 20:08:02.016076   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.016085   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:02.016091   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:02.016151   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:02.051087   73230 cri.go:89] found id: ""
	I0906 20:08:02.051125   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.051137   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:02.051150   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:02.051214   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:02.093230   73230 cri.go:89] found id: ""
	I0906 20:08:02.093254   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.093263   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:02.093268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:02.093323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:02.130580   73230 cri.go:89] found id: ""
	I0906 20:08:02.130609   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.130619   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:02.130629   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:02.130644   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:02.183192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:02.183231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:02.199079   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:02.199110   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:02.274259   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:02.274279   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:02.274303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:02.356198   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:02.356234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:04.899180   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:04.912879   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:04.912955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:04.950598   73230 cri.go:89] found id: ""
	I0906 20:08:04.950632   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.950642   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:04.950656   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:04.950713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:04.986474   73230 cri.go:89] found id: ""
	I0906 20:08:04.986504   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.986513   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:04.986519   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:04.986570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:05.025837   73230 cri.go:89] found id: ""
	I0906 20:08:05.025868   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.025877   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:05.025884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:05.025934   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:05.063574   73230 cri.go:89] found id: ""
	I0906 20:08:05.063613   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.063622   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:05.063628   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:05.063674   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:05.101341   73230 cri.go:89] found id: ""
	I0906 20:08:05.101371   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.101383   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:05.101390   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:05.101461   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:05.148551   73230 cri.go:89] found id: ""
	I0906 20:08:05.148580   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.148591   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:05.148599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:05.148668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:05.186907   73230 cri.go:89] found id: ""
	I0906 20:08:05.186935   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.186945   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:05.186953   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:05.187019   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:05.226237   73230 cri.go:89] found id: ""
	I0906 20:08:05.226265   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.226275   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:05.226287   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:05.226300   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:05.242892   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:05.242925   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:05.317797   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:05.317824   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:05.317839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:05.400464   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:05.400500   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:05.442632   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:05.442657   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:03.340541   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:05.840156   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.655913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:06.657424   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.691255   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.191739   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.998033   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:08.012363   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:08.012441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:08.048816   73230 cri.go:89] found id: ""
	I0906 20:08:08.048847   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.048876   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:08.048884   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:08.048947   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:08.109623   73230 cri.go:89] found id: ""
	I0906 20:08:08.109650   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.109661   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:08.109668   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:08.109730   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:08.145405   73230 cri.go:89] found id: ""
	I0906 20:08:08.145432   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.145443   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:08.145451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:08.145514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:08.187308   73230 cri.go:89] found id: ""
	I0906 20:08:08.187344   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.187355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:08.187362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:08.187422   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:08.228782   73230 cri.go:89] found id: ""
	I0906 20:08:08.228815   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.228826   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:08.228833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:08.228918   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:08.269237   73230 cri.go:89] found id: ""
	I0906 20:08:08.269266   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.269276   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:08.269285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:08.269351   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:08.305115   73230 cri.go:89] found id: ""
	I0906 20:08:08.305141   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.305149   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:08.305155   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:08.305206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:08.345442   73230 cri.go:89] found id: ""
	I0906 20:08:08.345472   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.345483   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:08.345494   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:08.345510   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:08.396477   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:08.396518   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:08.410978   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:08.411002   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:08.486220   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:08.486247   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:08.486265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:08.574138   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:08.574190   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:08.339280   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:10.340142   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.156809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.160037   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.192303   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.192456   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.192684   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.117545   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:11.131884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:11.131944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:11.169481   73230 cri.go:89] found id: ""
	I0906 20:08:11.169507   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.169518   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:11.169525   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:11.169590   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:11.211068   73230 cri.go:89] found id: ""
	I0906 20:08:11.211092   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.211100   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:11.211105   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:11.211157   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:11.250526   73230 cri.go:89] found id: ""
	I0906 20:08:11.250560   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.250574   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:11.250580   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:11.250627   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:11.289262   73230 cri.go:89] found id: ""
	I0906 20:08:11.289284   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.289292   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:11.289299   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:11.289346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:11.335427   73230 cri.go:89] found id: ""
	I0906 20:08:11.335456   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.335467   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:11.335475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:11.335535   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:11.375481   73230 cri.go:89] found id: ""
	I0906 20:08:11.375509   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.375518   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:11.375524   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:11.375575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:11.416722   73230 cri.go:89] found id: ""
	I0906 20:08:11.416748   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.416758   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:11.416765   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:11.416830   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:11.452986   73230 cri.go:89] found id: ""
	I0906 20:08:11.453019   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.453030   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:11.453042   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:11.453059   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:11.466435   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:11.466461   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:11.545185   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:11.545212   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:11.545231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:11.627390   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:11.627422   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:11.674071   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:11.674098   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.225887   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:14.242121   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:14.242200   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:14.283024   73230 cri.go:89] found id: ""
	I0906 20:08:14.283055   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.283067   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:14.283074   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:14.283135   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:14.325357   73230 cri.go:89] found id: ""
	I0906 20:08:14.325379   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.325387   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:14.325392   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:14.325455   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:14.362435   73230 cri.go:89] found id: ""
	I0906 20:08:14.362459   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.362467   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:14.362473   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:14.362537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:14.398409   73230 cri.go:89] found id: ""
	I0906 20:08:14.398441   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.398450   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:14.398455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:14.398509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:14.434902   73230 cri.go:89] found id: ""
	I0906 20:08:14.434934   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.434943   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:14.434950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:14.435009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:14.476605   73230 cri.go:89] found id: ""
	I0906 20:08:14.476635   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.476647   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:14.476655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:14.476717   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:14.533656   73230 cri.go:89] found id: ""
	I0906 20:08:14.533681   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.533690   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:14.533696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:14.533753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:14.599661   73230 cri.go:89] found id: ""
	I0906 20:08:14.599685   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.599693   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:14.599702   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:14.599715   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.657680   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:14.657712   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:14.671594   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:14.671624   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:14.747945   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:14.747969   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:14.747979   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:14.829021   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:14.829057   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:12.838805   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:14.839569   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.659405   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:16.156840   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:15.692205   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.693709   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.373569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:17.388910   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:17.388987   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:17.428299   73230 cri.go:89] found id: ""
	I0906 20:08:17.428335   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.428347   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:17.428354   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:17.428419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:17.464660   73230 cri.go:89] found id: ""
	I0906 20:08:17.464685   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.464692   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:17.464697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:17.464758   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:17.500018   73230 cri.go:89] found id: ""
	I0906 20:08:17.500047   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.500059   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:17.500067   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:17.500130   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:17.536345   73230 cri.go:89] found id: ""
	I0906 20:08:17.536375   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.536386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:17.536394   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:17.536456   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:17.574668   73230 cri.go:89] found id: ""
	I0906 20:08:17.574696   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.574707   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:17.574715   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:17.574780   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:17.611630   73230 cri.go:89] found id: ""
	I0906 20:08:17.611653   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.611663   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:17.611669   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:17.611713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:17.647610   73230 cri.go:89] found id: ""
	I0906 20:08:17.647639   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.647649   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:17.647657   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:17.647724   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:17.686204   73230 cri.go:89] found id: ""
	I0906 20:08:17.686233   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.686246   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:17.686260   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:17.686273   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:17.702040   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:17.702069   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:17.775033   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:17.775058   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:17.775074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:17.862319   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:17.862359   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:17.905567   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:17.905604   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:17.339116   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:19.839554   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:21.839622   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:18.157104   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.657604   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.191024   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:22.192687   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.457191   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:20.471413   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:20.471474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:20.533714   73230 cri.go:89] found id: ""
	I0906 20:08:20.533749   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.533765   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:20.533772   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:20.533833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:20.580779   73230 cri.go:89] found id: ""
	I0906 20:08:20.580811   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.580823   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:20.580830   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:20.580902   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:20.619729   73230 cri.go:89] found id: ""
	I0906 20:08:20.619755   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.619763   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:20.619769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:20.619816   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:20.661573   73230 cri.go:89] found id: ""
	I0906 20:08:20.661599   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.661606   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:20.661612   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:20.661664   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:20.709409   73230 cri.go:89] found id: ""
	I0906 20:08:20.709443   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.709455   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:20.709463   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:20.709515   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:20.746743   73230 cri.go:89] found id: ""
	I0906 20:08:20.746783   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.746808   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:20.746816   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:20.746891   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:20.788129   73230 cri.go:89] found id: ""
	I0906 20:08:20.788155   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.788164   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:20.788170   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:20.788217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:20.825115   73230 cri.go:89] found id: ""
	I0906 20:08:20.825139   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.825147   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:20.825156   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:20.825167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:20.880975   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:20.881013   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:20.895027   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:20.895061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:20.972718   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:20.972739   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:20.972754   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:21.053062   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:21.053096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:23.595439   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:23.612354   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:23.612419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:23.654479   73230 cri.go:89] found id: ""
	I0906 20:08:23.654508   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.654519   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:23.654526   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:23.654591   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:23.690061   73230 cri.go:89] found id: ""
	I0906 20:08:23.690092   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.690103   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:23.690112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:23.690173   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:23.726644   73230 cri.go:89] found id: ""
	I0906 20:08:23.726670   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.726678   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:23.726684   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:23.726744   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:23.763348   73230 cri.go:89] found id: ""
	I0906 20:08:23.763378   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.763386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:23.763391   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:23.763452   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:23.799260   73230 cri.go:89] found id: ""
	I0906 20:08:23.799290   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.799299   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:23.799305   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:23.799359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:23.843438   73230 cri.go:89] found id: ""
	I0906 20:08:23.843470   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.843481   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:23.843489   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:23.843558   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:23.879818   73230 cri.go:89] found id: ""
	I0906 20:08:23.879847   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.879856   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:23.879867   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:23.879933   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:23.916182   73230 cri.go:89] found id: ""
	I0906 20:08:23.916207   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.916220   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:23.916229   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:23.916240   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:23.987003   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:23.987022   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:23.987033   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:24.073644   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:24.073684   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:24.118293   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:24.118328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:24.172541   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:24.172582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:23.840441   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.338539   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:23.155661   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:25.155855   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:27.157624   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:24.692350   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.692534   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.687747   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:26.702174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:26.702238   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:26.740064   73230 cri.go:89] found id: ""
	I0906 20:08:26.740093   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.740101   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:26.740108   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:26.740158   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:26.775198   73230 cri.go:89] found id: ""
	I0906 20:08:26.775227   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.775237   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:26.775244   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:26.775303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:26.808850   73230 cri.go:89] found id: ""
	I0906 20:08:26.808892   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.808903   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:26.808915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:26.808974   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:26.842926   73230 cri.go:89] found id: ""
	I0906 20:08:26.842953   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.842964   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:26.842972   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:26.843031   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:26.878621   73230 cri.go:89] found id: ""
	I0906 20:08:26.878649   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.878658   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:26.878664   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:26.878713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:26.921816   73230 cri.go:89] found id: ""
	I0906 20:08:26.921862   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.921875   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:26.921884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:26.921952   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:26.960664   73230 cri.go:89] found id: ""
	I0906 20:08:26.960692   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.960702   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:26.960709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:26.960771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:27.004849   73230 cri.go:89] found id: ""
	I0906 20:08:27.004904   73230 logs.go:276] 0 containers: []
	W0906 20:08:27.004913   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:27.004922   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:27.004934   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:27.056237   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:27.056267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:27.071882   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:27.071904   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:27.143927   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:27.143949   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:27.143961   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:27.223901   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:27.223935   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:29.766615   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:29.780295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:29.780367   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:29.817745   73230 cri.go:89] found id: ""
	I0906 20:08:29.817775   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.817784   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:29.817790   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:29.817852   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:29.855536   73230 cri.go:89] found id: ""
	I0906 20:08:29.855559   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.855567   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:29.855572   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:29.855628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:29.895043   73230 cri.go:89] found id: ""
	I0906 20:08:29.895092   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.895104   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:29.895111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:29.895178   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:29.939225   73230 cri.go:89] found id: ""
	I0906 20:08:29.939248   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.939256   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:29.939262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:29.939331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:29.974166   73230 cri.go:89] found id: ""
	I0906 20:08:29.974190   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.974198   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:29.974203   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:29.974258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:30.009196   73230 cri.go:89] found id: ""
	I0906 20:08:30.009226   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.009237   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:30.009245   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:30.009310   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:30.043939   73230 cri.go:89] found id: ""
	I0906 20:08:30.043962   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.043970   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:30.043976   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:30.044023   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:30.080299   73230 cri.go:89] found id: ""
	I0906 20:08:30.080328   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.080336   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:30.080345   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:30.080356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:30.131034   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:30.131068   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:30.145502   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:30.145536   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:30.219941   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:30.219963   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:30.219978   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:30.307958   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:30.307995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:28.839049   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.338815   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.656748   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.657112   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.192284   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.193181   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.854002   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:32.867937   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:32.867998   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:32.906925   73230 cri.go:89] found id: ""
	I0906 20:08:32.906957   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.906969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:32.906976   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:32.907038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:32.946662   73230 cri.go:89] found id: ""
	I0906 20:08:32.946691   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.946702   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:32.946710   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:32.946771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:32.981908   73230 cri.go:89] found id: ""
	I0906 20:08:32.981936   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.981944   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:32.981950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:32.982001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:33.014902   73230 cri.go:89] found id: ""
	I0906 20:08:33.014930   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.014939   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:33.014945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:33.015055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:33.051265   73230 cri.go:89] found id: ""
	I0906 20:08:33.051290   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.051298   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:33.051310   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:33.051363   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:33.085436   73230 cri.go:89] found id: ""
	I0906 20:08:33.085468   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.085480   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:33.085487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:33.085552   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:33.121483   73230 cri.go:89] found id: ""
	I0906 20:08:33.121509   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.121517   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:33.121523   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:33.121578   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:33.159883   73230 cri.go:89] found id: ""
	I0906 20:08:33.159915   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.159926   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:33.159937   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:33.159953   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:33.174411   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:33.174442   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:33.243656   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:33.243694   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:33.243710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:33.321782   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:33.321823   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:33.363299   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:33.363335   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:33.339645   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.839545   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.650358   72441 pod_ready.go:82] duration metric: took 4m0.000296679s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:32.650386   72441 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:32.650410   72441 pod_ready.go:39] duration metric: took 4m12.042795571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:32.650440   72441 kubeadm.go:597] duration metric: took 4m19.97234293s to restartPrimaryControlPlane
	W0906 20:08:32.650505   72441 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:32.650542   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:08:33.692877   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:36.192090   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:38.192465   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.916159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:35.929190   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:35.929265   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:35.962853   73230 cri.go:89] found id: ""
	I0906 20:08:35.962890   73230 logs.go:276] 0 containers: []
	W0906 20:08:35.962901   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:35.962909   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:35.962969   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:36.000265   73230 cri.go:89] found id: ""
	I0906 20:08:36.000309   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.000318   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:36.000324   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:36.000374   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:36.042751   73230 cri.go:89] found id: ""
	I0906 20:08:36.042781   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.042792   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:36.042800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:36.042859   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:36.077922   73230 cri.go:89] found id: ""
	I0906 20:08:36.077957   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.077967   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:36.077975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:36.078038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:36.114890   73230 cri.go:89] found id: ""
	I0906 20:08:36.114926   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.114937   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:36.114945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:36.114997   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:36.148058   73230 cri.go:89] found id: ""
	I0906 20:08:36.148089   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.148101   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:36.148108   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:36.148167   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:36.187334   73230 cri.go:89] found id: ""
	I0906 20:08:36.187361   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.187371   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:36.187379   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:36.187498   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:36.221295   73230 cri.go:89] found id: ""
	I0906 20:08:36.221331   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.221342   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:36.221353   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:36.221367   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:36.273489   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:36.273527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:36.287975   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:36.288005   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:36.366914   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:36.366937   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:36.366950   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:36.446582   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:36.446619   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.987075   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:39.001051   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:39.001113   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:39.038064   73230 cri.go:89] found id: ""
	I0906 20:08:39.038093   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.038103   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:39.038110   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:39.038175   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:39.075759   73230 cri.go:89] found id: ""
	I0906 20:08:39.075788   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.075799   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:39.075805   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:39.075866   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:39.113292   73230 cri.go:89] found id: ""
	I0906 20:08:39.113320   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.113331   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:39.113339   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:39.113404   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:39.157236   73230 cri.go:89] found id: ""
	I0906 20:08:39.157269   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.157281   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:39.157289   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:39.157362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:39.195683   73230 cri.go:89] found id: ""
	I0906 20:08:39.195704   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.195712   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:39.195717   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:39.195763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:39.234865   73230 cri.go:89] found id: ""
	I0906 20:08:39.234894   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.234903   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:39.234909   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:39.234961   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:39.269946   73230 cri.go:89] found id: ""
	I0906 20:08:39.269975   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.269983   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:39.269989   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:39.270034   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:39.306184   73230 cri.go:89] found id: ""
	I0906 20:08:39.306214   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.306225   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:39.306235   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:39.306249   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:39.357887   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:39.357920   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:39.371736   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:39.371767   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:39.445674   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:39.445695   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:39.445708   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:39.525283   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:39.525316   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.343370   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.839247   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.691846   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.694807   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.069066   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:42.083229   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:42.083313   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:42.124243   73230 cri.go:89] found id: ""
	I0906 20:08:42.124267   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.124275   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:42.124280   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:42.124330   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:42.162070   73230 cri.go:89] found id: ""
	I0906 20:08:42.162102   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.162113   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:42.162120   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:42.162183   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:42.199161   73230 cri.go:89] found id: ""
	I0906 20:08:42.199191   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.199201   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:42.199208   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:42.199266   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:42.236956   73230 cri.go:89] found id: ""
	I0906 20:08:42.236980   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.236991   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:42.236996   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:42.237068   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:42.272299   73230 cri.go:89] found id: ""
	I0906 20:08:42.272328   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.272336   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:42.272341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:42.272400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:42.310280   73230 cri.go:89] found id: ""
	I0906 20:08:42.310304   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.310312   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:42.310317   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:42.310362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:42.345850   73230 cri.go:89] found id: ""
	I0906 20:08:42.345873   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.345881   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:42.345887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:42.345937   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:42.380785   73230 cri.go:89] found id: ""
	I0906 20:08:42.380812   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.380820   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:42.380830   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:42.380843   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.435803   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:42.435839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:42.450469   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:42.450498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:42.521565   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:42.521587   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:42.521599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:42.595473   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:42.595508   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:45.136985   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:45.150468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:45.150540   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:45.186411   73230 cri.go:89] found id: ""
	I0906 20:08:45.186440   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.186448   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:45.186454   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:45.186521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:45.224463   73230 cri.go:89] found id: ""
	I0906 20:08:45.224495   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.224506   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:45.224513   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:45.224568   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:45.262259   73230 cri.go:89] found id: ""
	I0906 20:08:45.262286   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.262295   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:45.262301   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:45.262357   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:45.299463   73230 cri.go:89] found id: ""
	I0906 20:08:45.299492   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.299501   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:45.299507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:45.299561   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:45.336125   73230 cri.go:89] found id: ""
	I0906 20:08:45.336153   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.336162   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:45.336168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:45.336216   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:45.370397   73230 cri.go:89] found id: ""
	I0906 20:08:45.370427   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.370439   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:45.370448   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:45.370518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:45.406290   73230 cri.go:89] found id: ""
	I0906 20:08:45.406322   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.406333   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:45.406341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:45.406402   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:45.441560   73230 cri.go:89] found id: ""
	I0906 20:08:45.441592   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.441603   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:45.441614   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:45.441627   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.840127   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.349331   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.192059   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:47.691416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.508769   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:45.508811   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:45.523659   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:45.523696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:45.595544   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:45.595567   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:45.595582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:45.676060   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:45.676096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:48.216490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:48.230021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:48.230093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:48.267400   73230 cri.go:89] found id: ""
	I0906 20:08:48.267433   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.267444   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:48.267451   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:48.267519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:48.314694   73230 cri.go:89] found id: ""
	I0906 20:08:48.314722   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.314731   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:48.314739   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:48.314805   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:48.358861   73230 cri.go:89] found id: ""
	I0906 20:08:48.358895   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.358906   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:48.358915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:48.358990   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:48.398374   73230 cri.go:89] found id: ""
	I0906 20:08:48.398400   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.398410   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:48.398416   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:48.398488   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:48.438009   73230 cri.go:89] found id: ""
	I0906 20:08:48.438039   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.438050   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:48.438058   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:48.438115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:48.475970   73230 cri.go:89] found id: ""
	I0906 20:08:48.475998   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.476007   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:48.476013   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:48.476071   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:48.512191   73230 cri.go:89] found id: ""
	I0906 20:08:48.512220   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.512230   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:48.512237   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:48.512299   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:48.547820   73230 cri.go:89] found id: ""
	I0906 20:08:48.547850   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.547861   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:48.547872   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:48.547886   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:48.616962   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:48.616997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:48.631969   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:48.631998   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:48.717025   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:48.717043   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:48.717054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:48.796131   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:48.796167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:47.838558   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.839063   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.839099   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.693239   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:52.191416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.342030   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:51.355761   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:51.355845   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:51.395241   73230 cri.go:89] found id: ""
	I0906 20:08:51.395272   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.395283   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:51.395290   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:51.395350   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:51.433860   73230 cri.go:89] found id: ""
	I0906 20:08:51.433888   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.433897   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:51.433904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:51.433968   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:51.475568   73230 cri.go:89] found id: ""
	I0906 20:08:51.475598   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.475608   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:51.475615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:51.475678   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:51.512305   73230 cri.go:89] found id: ""
	I0906 20:08:51.512329   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.512337   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:51.512342   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:51.512391   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:51.545796   73230 cri.go:89] found id: ""
	I0906 20:08:51.545819   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.545827   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:51.545833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:51.545884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:51.578506   73230 cri.go:89] found id: ""
	I0906 20:08:51.578531   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.578539   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:51.578545   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:51.578609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:51.616571   73230 cri.go:89] found id: ""
	I0906 20:08:51.616596   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.616609   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:51.616615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:51.616660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:51.651542   73230 cri.go:89] found id: ""
	I0906 20:08:51.651566   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.651580   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:51.651588   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:51.651599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:51.705160   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:51.705193   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:51.719450   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:51.719477   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:51.789775   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:51.789796   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:51.789809   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:51.870123   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:51.870158   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.411818   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:54.425759   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:54.425818   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:54.467920   73230 cri.go:89] found id: ""
	I0906 20:08:54.467943   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.467951   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:54.467956   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:54.468008   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:54.508324   73230 cri.go:89] found id: ""
	I0906 20:08:54.508349   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.508357   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:54.508363   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:54.508410   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:54.544753   73230 cri.go:89] found id: ""
	I0906 20:08:54.544780   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.544790   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:54.544797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:54.544884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:54.581407   73230 cri.go:89] found id: ""
	I0906 20:08:54.581436   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.581446   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:54.581453   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:54.581514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:54.618955   73230 cri.go:89] found id: ""
	I0906 20:08:54.618986   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.618998   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:54.619006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:54.619065   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:54.656197   73230 cri.go:89] found id: ""
	I0906 20:08:54.656229   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.656248   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:54.656255   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:54.656316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:54.697499   73230 cri.go:89] found id: ""
	I0906 20:08:54.697536   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.697544   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:54.697549   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:54.697600   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:54.734284   73230 cri.go:89] found id: ""
	I0906 20:08:54.734313   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.734331   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:54.734342   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:54.734356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:54.811079   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:54.811100   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:54.811111   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:54.887309   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:54.887346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.930465   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:54.930499   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:55.000240   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:55.000303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:54.339076   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:54.833352   72867 pod_ready.go:82] duration metric: took 4m0.000854511s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:54.833398   72867 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:54.833423   72867 pod_ready.go:39] duration metric: took 4m14.79685184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:54.833458   72867 kubeadm.go:597] duration metric: took 4m22.254900492s to restartPrimaryControlPlane
	W0906 20:08:54.833525   72867 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:54.833576   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:08:54.192038   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:56.192120   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:58.193505   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:57.530956   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:57.544056   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:57.544136   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:57.584492   73230 cri.go:89] found id: ""
	I0906 20:08:57.584519   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.584528   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:57.584534   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:57.584585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:57.620220   73230 cri.go:89] found id: ""
	I0906 20:08:57.620250   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.620259   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:57.620265   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:57.620321   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:57.655245   73230 cri.go:89] found id: ""
	I0906 20:08:57.655268   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.655283   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:57.655288   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:57.655346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:57.690439   73230 cri.go:89] found id: ""
	I0906 20:08:57.690470   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.690481   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:57.690487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:57.690551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:57.728179   73230 cri.go:89] found id: ""
	I0906 20:08:57.728206   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.728214   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:57.728221   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:57.728270   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:57.763723   73230 cri.go:89] found id: ""
	I0906 20:08:57.763752   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.763761   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:57.763767   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:57.763825   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:57.799836   73230 cri.go:89] found id: ""
	I0906 20:08:57.799861   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.799869   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:57.799876   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:57.799922   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:57.834618   73230 cri.go:89] found id: ""
	I0906 20:08:57.834644   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.834651   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:57.834660   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:57.834671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:57.887297   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:57.887331   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:57.901690   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:57.901717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:57.969179   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:57.969209   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:57.969223   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:58.052527   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:58.052642   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:58.870446   72441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.219876198s)
	I0906 20:08:58.870530   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:08:58.888197   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:08:58.899185   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:08:58.909740   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:08:58.909762   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:08:58.909806   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:08:58.919589   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:08:58.919646   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:08:58.930386   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:08:58.940542   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:08:58.940621   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:08:58.951673   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.963471   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:08:58.963545   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.974638   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:08:58.984780   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:08:58.984843   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:08:58.995803   72441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:08:59.046470   72441 kubeadm.go:310] W0906 20:08:59.003226    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.047297   72441 kubeadm.go:310] W0906 20:08:59.004193    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.166500   72441 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:00.691499   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:02.692107   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:00.593665   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:00.608325   73230 kubeadm.go:597] duration metric: took 4m4.153407014s to restartPrimaryControlPlane
	W0906 20:09:00.608399   73230 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:00.608428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:05.878028   73230 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.269561172s)
	I0906 20:09:05.878112   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:05.893351   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:05.904668   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:05.915560   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:05.915583   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:05.915633   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:09:05.926566   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:05.926625   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:05.937104   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:09:05.946406   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:05.946467   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:05.956203   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.965691   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:05.965751   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.976210   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:09:05.986104   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:05.986174   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:05.996282   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:06.068412   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:09:06.068507   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:06.213882   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:06.214044   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:06.214191   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:09:06.406793   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.067295   72441 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:07.067370   72441 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:07.067449   72441 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:07.067595   72441 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:07.067737   72441 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:07.067795   72441 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.069381   72441 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:07.069477   72441 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:07.069559   72441 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:07.069652   72441 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:07.069733   72441 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:07.069825   72441 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:07.069898   72441 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:07.069981   72441 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:07.070068   72441 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:07.070178   72441 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:07.070279   72441 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:07.070349   72441 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:07.070424   72441 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:07.070494   72441 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:07.070592   72441 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:07.070669   72441 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.070755   72441 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.070828   72441 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.070916   72441 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.070972   72441 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:07.072214   72441 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.072317   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.072399   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.072487   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.072613   72441 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.072685   72441 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.072719   72441 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.072837   72441 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:07.072977   72441 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:07.073063   72441 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.515053ms
	I0906 20:09:07.073178   72441 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:07.073257   72441 kubeadm.go:310] [api-check] The API server is healthy after 5.001748851s
	I0906 20:09:07.073410   72441 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:07.073558   72441 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:07.073650   72441 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:07.073860   72441 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-458066 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:07.073936   72441 kubeadm.go:310] [bootstrap-token] Using token: 3t2lf6.w44vkc4kfppuo2gp
	I0906 20:09:07.075394   72441 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:07.075524   72441 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:07.075621   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:07.075738   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:07.075905   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:07.076003   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:07.076094   72441 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:07.076222   72441 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:07.076397   72441 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:07.076486   72441 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:07.076502   72441 kubeadm.go:310] 
	I0906 20:09:07.076579   72441 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:07.076594   72441 kubeadm.go:310] 
	I0906 20:09:07.076687   72441 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:07.076698   72441 kubeadm.go:310] 
	I0906 20:09:07.076727   72441 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:07.076810   72441 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:07.076893   72441 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:07.076900   72441 kubeadm.go:310] 
	I0906 20:09:07.077016   72441 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:07.077029   72441 kubeadm.go:310] 
	I0906 20:09:07.077090   72441 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:07.077105   72441 kubeadm.go:310] 
	I0906 20:09:07.077172   72441 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:07.077273   72441 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:07.077368   72441 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:07.077377   72441 kubeadm.go:310] 
	I0906 20:09:07.077496   72441 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:07.077589   72441 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:07.077600   72441 kubeadm.go:310] 
	I0906 20:09:07.077680   72441 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.077767   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:07.077807   72441 kubeadm.go:310] 	--control-plane 
	I0906 20:09:07.077817   72441 kubeadm.go:310] 
	I0906 20:09:07.077927   72441 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:07.077946   72441 kubeadm.go:310] 
	I0906 20:09:07.078053   72441 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.078191   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:09:07.078206   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:09:07.078216   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:07.079782   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:07.080965   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:07.092500   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:09:07.112546   72441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:07.112618   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:07.112648   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-458066 minikube.k8s.io/updated_at=2024_09_06T20_09_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=embed-certs-458066 minikube.k8s.io/primary=true
	I0906 20:09:07.343125   72441 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:07.343284   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:06.408933   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:06.409043   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:06.409126   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:06.409242   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:06.409351   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:06.409445   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:06.409559   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:06.409666   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:06.409758   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:06.409870   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:06.409964   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:06.410010   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:06.410101   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:06.721268   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:06.888472   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.414908   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.505887   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.525704   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.525835   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.525913   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.699971   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:04.692422   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.193312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.701970   73230 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.702095   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.708470   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.710216   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.711016   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.714706   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:09:07.844097   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.344174   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.843884   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.343591   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.843748   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.344148   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.844002   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.343424   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.444023   72441 kubeadm.go:1113] duration metric: took 4.331471016s to wait for elevateKubeSystemPrivileges
	I0906 20:09:11.444067   72441 kubeadm.go:394] duration metric: took 4m58.815096997s to StartCluster
	I0906 20:09:11.444093   72441 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.444186   72441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:11.446093   72441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.446360   72441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:11.446430   72441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:11.446521   72441 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-458066"
	I0906 20:09:11.446542   72441 addons.go:69] Setting default-storageclass=true in profile "embed-certs-458066"
	I0906 20:09:11.446560   72441 addons.go:69] Setting metrics-server=true in profile "embed-certs-458066"
	I0906 20:09:11.446609   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:11.446615   72441 addons.go:234] Setting addon metrics-server=true in "embed-certs-458066"
	W0906 20:09:11.446663   72441 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:11.446694   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.446576   72441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-458066"
	I0906 20:09:11.446570   72441 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-458066"
	W0906 20:09:11.446779   72441 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:11.446810   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.447077   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447112   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447170   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447211   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447350   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447426   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447879   72441 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:11.449461   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:11.463673   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44603
	I0906 20:09:11.463676   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0906 20:09:11.464129   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464231   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464669   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464691   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.464675   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464745   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.465097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465139   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465608   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465634   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.465731   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465778   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.466622   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0906 20:09:11.466967   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.467351   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.467366   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.467622   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.467759   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.471093   72441 addons.go:234] Setting addon default-storageclass=true in "embed-certs-458066"
	W0906 20:09:11.471115   72441 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:11.471145   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.471524   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.471543   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.488980   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0906 20:09:11.489014   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0906 20:09:11.489399   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0906 20:09:11.489465   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489517   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489908   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.490116   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490134   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490144   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490158   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490411   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490427   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490481   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490872   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490886   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.491406   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.491500   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.491520   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.491619   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.493485   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.493901   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.495272   72441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:11.495274   72441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:11.496553   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:11.496575   72441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:11.496597   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.496647   72441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.496667   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:11.496684   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.500389   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500395   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500469   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.500786   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500808   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500952   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501105   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.501145   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501259   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501305   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.501389   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501501   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.510188   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0906 20:09:11.510617   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.511142   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.511169   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.511539   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.511754   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.513207   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.513439   72441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.513455   72441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:11.513474   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.516791   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517292   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.517323   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517563   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.517898   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.518085   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.518261   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.669057   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:11.705086   72441 node_ready.go:35] waiting up to 6m0s for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731651   72441 node_ready.go:49] node "embed-certs-458066" has status "Ready":"True"
	I0906 20:09:11.731679   72441 node_ready.go:38] duration metric: took 26.546983ms for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731691   72441 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:11.740680   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:11.767740   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:11.767760   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:11.771571   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.804408   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:11.804435   72441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:11.844160   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.856217   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:11.856240   72441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:11.899134   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:13.159543   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.315345353s)
	I0906 20:09:13.159546   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387931315s)
	I0906 20:09:13.159639   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159660   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159601   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159711   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.159985   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.159997   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160008   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160018   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160080   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160095   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160104   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160115   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160265   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160289   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160401   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160417   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185478   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.185512   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.185914   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.185934   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185949   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.228561   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.329382232s)
	I0906 20:09:13.228621   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.228636   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228924   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.228978   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.228991   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.229001   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.229229   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.229258   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.229270   72441 addons.go:475] Verifying addon metrics-server=true in "embed-certs-458066"
	I0906 20:09:13.230827   72441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:09.691281   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:11.692514   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:13.231988   72441 addons.go:510] duration metric: took 1.785558897s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0906 20:09:13.750043   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.247314   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.748039   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:16.748064   72441 pod_ready.go:82] duration metric: took 5.007352361s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:16.748073   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:14.192167   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.691856   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:18.754580   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:19.254643   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:19.254669   72441 pod_ready.go:82] duration metric: took 2.506589666s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:19.254680   72441 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762162   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.762188   72441 pod_ready.go:82] duration metric: took 1.507501384s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762202   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770835   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.770860   72441 pod_ready.go:82] duration metric: took 8.65029ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770872   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779692   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.779713   72441 pod_ready.go:82] duration metric: took 8.832607ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779725   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786119   72441 pod_ready.go:93] pod "kube-proxy-rzx2f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.786146   72441 pod_ready.go:82] duration metric: took 6.414063ms for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786158   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852593   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.852630   72441 pod_ready.go:82] duration metric: took 66.461213ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852642   72441 pod_ready.go:39] duration metric: took 9.120937234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:20.852663   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:20.852729   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:20.871881   72441 api_server.go:72] duration metric: took 9.425481233s to wait for apiserver process to appear ...
	I0906 20:09:20.871911   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:20.871927   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:09:20.876997   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:09:20.878290   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:20.878314   72441 api_server.go:131] duration metric: took 6.396943ms to wait for apiserver health ...
	I0906 20:09:20.878324   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:21.057265   72441 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:21.057303   72441 system_pods.go:61] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.057312   72441 system_pods.go:61] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.057319   72441 system_pods.go:61] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.057326   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.057332   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.057338   72441 system_pods.go:61] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.057345   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.057356   72441 system_pods.go:61] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.057367   72441 system_pods.go:61] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.057381   72441 system_pods.go:74] duration metric: took 179.050809ms to wait for pod list to return data ...
	I0906 20:09:21.057394   72441 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:21.252816   72441 default_sa.go:45] found service account: "default"
	I0906 20:09:21.252842   72441 default_sa.go:55] duration metric: took 195.436403ms for default service account to be created ...
	I0906 20:09:21.252851   72441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:21.455714   72441 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:21.455742   72441 system_pods.go:89] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.455748   72441 system_pods.go:89] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.455752   72441 system_pods.go:89] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.455755   72441 system_pods.go:89] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.455759   72441 system_pods.go:89] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.455763   72441 system_pods.go:89] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.455766   72441 system_pods.go:89] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.455772   72441 system_pods.go:89] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.455776   72441 system_pods.go:89] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.455784   72441 system_pods.go:126] duration metric: took 202.909491ms to wait for k8s-apps to be running ...
	I0906 20:09:21.455791   72441 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:21.455832   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.474124   72441 system_svc.go:56] duration metric: took 18.325386ms WaitForService to wait for kubelet
	I0906 20:09:21.474150   72441 kubeadm.go:582] duration metric: took 10.027757317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:21.474172   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:21.653674   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:21.653697   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:21.653708   72441 node_conditions.go:105] duration metric: took 179.531797ms to run NodePressure ...
	I0906 20:09:21.653718   72441 start.go:241] waiting for startup goroutines ...
	I0906 20:09:21.653727   72441 start.go:246] waiting for cluster config update ...
	I0906 20:09:21.653740   72441 start.go:255] writing updated cluster config ...
	I0906 20:09:21.654014   72441 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:21.703909   72441 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:21.705502   72441 out.go:177] * Done! kubectl is now configured to use "embed-certs-458066" cluster and "default" namespace by default
	I0906 20:09:21.102986   72867 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.269383553s)
	I0906 20:09:21.103094   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.118935   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:21.129099   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:21.139304   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:21.139326   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:21.139374   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:09:21.149234   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:21.149289   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:21.160067   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:09:21.169584   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:21.169664   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:21.179885   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.190994   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:21.191062   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.201649   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:09:21.211165   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:21.211223   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:21.220998   72867 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:21.269780   72867 kubeadm.go:310] W0906 20:09:21.240800    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.270353   72867 kubeadm.go:310] W0906 20:09:21.241533    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.389445   72867 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:18.692475   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:21.193075   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:23.697031   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:26.191208   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:28.192166   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:30.493468   72867 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:30.493543   72867 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:30.493620   72867 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:30.493751   72867 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:30.493891   72867 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:30.493971   72867 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:30.495375   72867 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:30.495467   72867 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:30.495537   72867 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:30.495828   72867 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:30.495913   72867 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:30.495977   72867 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:30.496024   72867 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:30.496112   72867 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:30.496207   72867 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:30.496308   72867 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:30.496400   72867 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:30.496452   72867 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:30.496519   72867 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:30.496601   72867 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:30.496690   72867 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:30.496774   72867 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:30.496887   72867 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:30.496946   72867 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:30.497018   72867 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:30.497074   72867 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:30.498387   72867 out.go:235]   - Booting up control plane ...
	I0906 20:09:30.498472   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:30.498550   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:30.498616   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:30.498715   72867 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:30.498786   72867 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:30.498821   72867 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:30.498969   72867 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:30.499076   72867 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:30.499126   72867 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.325552ms
	I0906 20:09:30.499189   72867 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:30.499269   72867 kubeadm.go:310] [api-check] The API server is healthy after 5.002261512s
	I0906 20:09:30.499393   72867 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:30.499507   72867 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:30.499586   72867 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:30.499818   72867 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-653828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:30.499915   72867 kubeadm.go:310] [bootstrap-token] Using token: 6yha4r.f9kcjkhkq2u0pp1e
	I0906 20:09:30.501217   72867 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:30.501333   72867 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:30.501438   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:30.501630   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:30.501749   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:30.501837   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:30.501904   72867 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:30.501996   72867 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:30.502032   72867 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:30.502085   72867 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:30.502093   72867 kubeadm.go:310] 
	I0906 20:09:30.502153   72867 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:30.502166   72867 kubeadm.go:310] 
	I0906 20:09:30.502242   72867 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:30.502257   72867 kubeadm.go:310] 
	I0906 20:09:30.502290   72867 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:30.502358   72867 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:30.502425   72867 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:30.502433   72867 kubeadm.go:310] 
	I0906 20:09:30.502486   72867 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:30.502494   72867 kubeadm.go:310] 
	I0906 20:09:30.502529   72867 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:30.502536   72867 kubeadm.go:310] 
	I0906 20:09:30.502575   72867 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:30.502633   72867 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:30.502706   72867 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:30.502720   72867 kubeadm.go:310] 
	I0906 20:09:30.502791   72867 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:30.502882   72867 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:30.502893   72867 kubeadm.go:310] 
	I0906 20:09:30.502982   72867 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503099   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:30.503120   72867 kubeadm.go:310] 	--control-plane 
	I0906 20:09:30.503125   72867 kubeadm.go:310] 
	I0906 20:09:30.503240   72867 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:30.503247   72867 kubeadm.go:310] 
	I0906 20:09:30.503312   72867 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503406   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:09:30.503416   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:09:30.503424   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:30.504880   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:30.505997   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:30.517864   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:09:30.539641   72867 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:30.539731   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653828 minikube.k8s.io/updated_at=2024_09_06T20_09_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=default-k8s-diff-port-653828 minikube.k8s.io/primary=true
	I0906 20:09:30.539732   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.576812   72867 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:30.742163   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.242299   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.742502   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.192201   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.691488   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.242418   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:32.742424   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.242317   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.742587   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.242563   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.342481   72867 kubeadm.go:1113] duration metric: took 3.802829263s to wait for elevateKubeSystemPrivileges
	I0906 20:09:34.342520   72867 kubeadm.go:394] duration metric: took 5m1.826839653s to StartCluster
	I0906 20:09:34.342542   72867 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.342640   72867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:34.345048   72867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.345461   72867 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:34.345576   72867 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:34.345655   72867 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345691   72867 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653828"
	I0906 20:09:34.345696   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:34.345699   72867 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345712   72867 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345737   72867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653828"
	W0906 20:09:34.345703   72867 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:34.345752   72867 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.345762   72867 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:34.345779   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.345795   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.346102   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346136   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346174   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346195   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346231   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346201   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.347895   72867 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:34.349535   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:34.363021   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0906 20:09:34.363492   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.364037   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.364062   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.364463   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.365147   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.365186   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.365991   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36811
	I0906 20:09:34.366024   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0906 20:09:34.366472   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366512   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366953   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.366970   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367086   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.367113   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367494   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367642   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367988   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.368011   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.368282   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.375406   72867 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.375432   72867 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:34.375460   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.375825   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.375858   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.382554   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0906 20:09:34.383102   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.383600   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.383616   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.383938   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.384214   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.385829   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.387409   72867 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:34.388348   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:34.388366   72867 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:34.388381   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.392542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.392813   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.392828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.393018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.393068   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0906 20:09:34.393374   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.393439   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.393550   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.393686   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.394089   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.394116   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.394464   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.394651   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.396559   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.396712   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0906 20:09:34.397142   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.397646   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.397669   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.397929   72867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:34.398023   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.398468   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.398511   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.399007   72867 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.399024   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:34.399043   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.405024   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405057   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.405081   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405287   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.405479   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.405634   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.405752   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.414779   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0906 20:09:34.415230   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.415662   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.415679   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.415993   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.416151   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.417818   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.418015   72867 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.418028   72867 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:34.418045   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.421303   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421379   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.421399   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421645   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.421815   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.421979   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.422096   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.582923   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:34.600692   72867 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617429   72867 node_ready.go:49] node "default-k8s-diff-port-653828" has status "Ready":"True"
	I0906 20:09:34.617454   72867 node_ready.go:38] duration metric: took 16.723446ms for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617465   72867 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:34.632501   72867 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:34.679561   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.682999   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.746380   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:34.746406   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:34.876650   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:34.876680   72867 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:34.935388   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:34.935415   72867 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:35.092289   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:35.709257   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02965114s)
	I0906 20:09:35.709297   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026263795s)
	I0906 20:09:35.709352   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709373   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709319   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709398   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709810   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.709911   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709898   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709926   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.709954   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709962   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709876   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710029   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710047   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.710065   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.710226   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710238   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710636   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.710665   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710681   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754431   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.754458   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.754765   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.754781   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754821   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.181191   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:36.181219   72867 pod_ready.go:82] duration metric: took 1.54868366s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.181233   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.351617   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.259284594s)
	I0906 20:09:36.351684   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.351701   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.351992   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352078   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352100   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.352111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.352055   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352402   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352914   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352934   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352945   72867 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-653828"
	I0906 20:09:36.354972   72867 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:36.356127   72867 addons.go:510] duration metric: took 2.010554769s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0906 20:09:34.695700   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:37.193366   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:38.187115   72867 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:39.188966   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:39.188998   72867 pod_ready.go:82] duration metric: took 3.007757042s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:39.189012   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:41.196228   72867 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.206614   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.206636   72867 pod_ready.go:82] duration metric: took 3.017616218s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.206647   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212140   72867 pod_ready.go:93] pod "kube-proxy-7846f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.212165   72867 pod_ready.go:82] duration metric: took 5.512697ms for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212174   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217505   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.217527   72867 pod_ready.go:82] duration metric: took 5.346748ms for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217534   72867 pod_ready.go:39] duration metric: took 7.600058293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:42.217549   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:42.217600   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:42.235961   72867 api_server.go:72] duration metric: took 7.890460166s to wait for apiserver process to appear ...
	I0906 20:09:42.235987   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:42.236003   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:09:42.240924   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:09:42.241889   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:42.241912   72867 api_server.go:131] duration metric: took 5.919055ms to wait for apiserver health ...
	I0906 20:09:42.241922   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:42.247793   72867 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:42.247825   72867 system_pods.go:61] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.247833   72867 system_pods.go:61] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.247839   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.247845   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.247852   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.247857   72867 system_pods.go:61] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.247861   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.247866   72867 system_pods.go:61] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.247873   72867 system_pods.go:61] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.247883   72867 system_pods.go:74] duration metric: took 5.95413ms to wait for pod list to return data ...
	I0906 20:09:42.247893   72867 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:42.251260   72867 default_sa.go:45] found service account: "default"
	I0906 20:09:42.251277   72867 default_sa.go:55] duration metric: took 3.3795ms for default service account to be created ...
	I0906 20:09:42.251284   72867 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:42.256204   72867 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:42.256228   72867 system_pods.go:89] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.256233   72867 system_pods.go:89] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.256237   72867 system_pods.go:89] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.256241   72867 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.256245   72867 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.256249   72867 system_pods.go:89] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.256252   72867 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.256258   72867 system_pods.go:89] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.256261   72867 system_pods.go:89] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.256270   72867 system_pods.go:126] duration metric: took 4.981383ms to wait for k8s-apps to be running ...
	I0906 20:09:42.256278   72867 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:42.256323   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:42.272016   72867 system_svc.go:56] duration metric: took 15.727796ms WaitForService to wait for kubelet
	I0906 20:09:42.272050   72867 kubeadm.go:582] duration metric: took 7.926551396s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:42.272081   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:42.275486   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:42.275516   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:42.275527   72867 node_conditions.go:105] duration metric: took 3.439966ms to run NodePressure ...
	I0906 20:09:42.275540   72867 start.go:241] waiting for startup goroutines ...
	I0906 20:09:42.275548   72867 start.go:246] waiting for cluster config update ...
	I0906 20:09:42.275561   72867 start.go:255] writing updated cluster config ...
	I0906 20:09:42.275823   72867 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:42.326049   72867 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:42.328034   72867 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653828" cluster and "default" namespace by default
	I0906 20:09:39.692393   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.192176   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:44.691934   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:45.185317   72322 pod_ready.go:82] duration metric: took 4m0.000138495s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	E0906 20:09:45.185352   72322 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:09:45.185371   72322 pod_ready.go:39] duration metric: took 4m12.222584677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:45.185403   72322 kubeadm.go:597] duration metric: took 4m20.152442555s to restartPrimaryControlPlane
	W0906 20:09:45.185466   72322 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:45.185496   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:47.714239   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:09:47.714464   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:47.714711   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:09:52.715187   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:52.715391   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:02.716155   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:02.716424   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:11.446625   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261097398s)
	I0906 20:10:11.446717   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:11.472899   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:10:11.492643   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:10:11.509855   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:10:11.509878   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:10:11.509933   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:10:11.523039   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:10:11.523099   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:10:11.540484   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:10:11.560246   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:10:11.560323   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:10:11.585105   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.596067   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:10:11.596138   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.607049   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:10:11.616982   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:10:11.617058   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:10:11.627880   72322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:10:11.672079   72322 kubeadm.go:310] W0906 20:10:11.645236    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.672935   72322 kubeadm.go:310] W0906 20:10:11.646151    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.789722   72322 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:10:20.270339   72322 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:10:20.270450   72322 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:10:20.270551   72322 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:10:20.270697   72322 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:10:20.270837   72322 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:10:20.270932   72322 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:10:20.272324   72322 out.go:235]   - Generating certificates and keys ...
	I0906 20:10:20.272437   72322 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:10:20.272530   72322 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:10:20.272634   72322 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:10:20.272732   72322 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:10:20.272842   72322 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:10:20.272950   72322 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:10:20.273051   72322 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:10:20.273135   72322 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:10:20.273272   72322 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:10:20.273361   72322 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:10:20.273400   72322 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:10:20.273456   72322 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:10:20.273517   72322 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:10:20.273571   72322 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:10:20.273625   72322 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:10:20.273682   72322 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:10:20.273731   72322 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:10:20.273801   72322 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:10:20.273856   72322 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:10:20.275359   72322 out.go:235]   - Booting up control plane ...
	I0906 20:10:20.275466   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:10:20.275539   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:10:20.275595   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:10:20.275692   72322 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:10:20.275774   72322 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:10:20.275812   72322 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:10:20.275917   72322 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:10:20.276005   72322 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:10:20.276063   72322 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001365031s
	I0906 20:10:20.276127   72322 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:10:20.276189   72322 kubeadm.go:310] [api-check] The API server is healthy after 5.002810387s
	I0906 20:10:20.276275   72322 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:10:20.276410   72322 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:10:20.276480   72322 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:10:20.276639   72322 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-504385 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:10:20.276690   72322 kubeadm.go:310] [bootstrap-token] Using token: fv12w2.cc6vcthx5yn6r6ru
	I0906 20:10:20.277786   72322 out.go:235]   - Configuring RBAC rules ...
	I0906 20:10:20.277872   72322 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:10:20.277941   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:10:20.278082   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:10:20.278231   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:10:20.278351   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:10:20.278426   72322 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:10:20.278541   72322 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:10:20.278614   72322 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:10:20.278692   72322 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:10:20.278700   72322 kubeadm.go:310] 
	I0906 20:10:20.278780   72322 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:10:20.278790   72322 kubeadm.go:310] 
	I0906 20:10:20.278880   72322 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:10:20.278889   72322 kubeadm.go:310] 
	I0906 20:10:20.278932   72322 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:10:20.279023   72322 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:10:20.279079   72322 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:10:20.279086   72322 kubeadm.go:310] 
	I0906 20:10:20.279141   72322 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:10:20.279148   72322 kubeadm.go:310] 
	I0906 20:10:20.279186   72322 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:10:20.279195   72322 kubeadm.go:310] 
	I0906 20:10:20.279291   72322 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:10:20.279420   72322 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:10:20.279524   72322 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:10:20.279535   72322 kubeadm.go:310] 
	I0906 20:10:20.279647   72322 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:10:20.279756   72322 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:10:20.279767   72322 kubeadm.go:310] 
	I0906 20:10:20.279896   72322 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280043   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:10:20.280080   72322 kubeadm.go:310] 	--control-plane 
	I0906 20:10:20.280090   72322 kubeadm.go:310] 
	I0906 20:10:20.280230   72322 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:10:20.280258   72322 kubeadm.go:310] 
	I0906 20:10:20.280365   72322 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280514   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:10:20.280532   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:10:20.280541   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:10:20.282066   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:10:20.283228   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:10:20.294745   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:10:20.317015   72322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-504385 minikube.k8s.io/updated_at=2024_09_06T20_10_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=no-preload-504385 minikube.k8s.io/primary=true
	I0906 20:10:20.528654   72322 ops.go:34] apiserver oom_adj: -16
	I0906 20:10:20.528681   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.029394   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.528922   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.029667   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.528814   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.029163   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.529709   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.029277   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.529466   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.668636   72322 kubeadm.go:1113] duration metric: took 4.351557657s to wait for elevateKubeSystemPrivileges
	I0906 20:10:24.668669   72322 kubeadm.go:394] duration metric: took 4m59.692142044s to StartCluster
	I0906 20:10:24.668690   72322 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.668775   72322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:10:24.670483   72322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.670765   72322 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:10:24.670874   72322 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:10:24.670975   72322 addons.go:69] Setting storage-provisioner=true in profile "no-preload-504385"
	I0906 20:10:24.670990   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:10:24.671015   72322 addons.go:234] Setting addon storage-provisioner=true in "no-preload-504385"
	W0906 20:10:24.671027   72322 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:10:24.670988   72322 addons.go:69] Setting default-storageclass=true in profile "no-preload-504385"
	I0906 20:10:24.671020   72322 addons.go:69] Setting metrics-server=true in profile "no-preload-504385"
	I0906 20:10:24.671053   72322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-504385"
	I0906 20:10:24.671069   72322 addons.go:234] Setting addon metrics-server=true in "no-preload-504385"
	I0906 20:10:24.671057   72322 host.go:66] Checking if "no-preload-504385" exists ...
	W0906 20:10:24.671080   72322 addons.go:243] addon metrics-server should already be in state true
	I0906 20:10:24.671112   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.671387   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671413   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671433   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671462   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671476   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671509   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.672599   72322 out.go:177] * Verifying Kubernetes components...
	I0906 20:10:24.674189   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:10:24.688494   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0906 20:10:24.689082   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.689564   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.689586   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.690020   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.690242   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.691753   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0906 20:10:24.691758   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0906 20:10:24.692223   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692314   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692744   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692761   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.692892   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692912   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.693162   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693498   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693821   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.693851   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694035   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694067   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694118   72322 addons.go:234] Setting addon default-storageclass=true in "no-preload-504385"
	W0906 20:10:24.694133   72322 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:10:24.694159   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.694503   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694533   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.710695   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0906 20:10:24.712123   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.712820   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.712844   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.713265   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.713488   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.714238   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0906 20:10:24.714448   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0906 20:10:24.714584   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.714801   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.715454   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715472   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715517   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.715631   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715643   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715961   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716468   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716527   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.717120   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.717170   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.717534   72322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:10:24.718838   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.719392   72322 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:24.719413   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:10:24.719435   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.720748   72322 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:10:22.717567   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:22.717827   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:24.722045   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:10:24.722066   72322 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:10:24.722084   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.722722   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723383   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.723408   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723545   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.723788   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.723970   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.724133   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.725538   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.725987   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.726006   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.726137   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.726317   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.726499   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.726629   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.734236   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0906 20:10:24.734597   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.735057   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.735069   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.735479   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.735612   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.737446   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.737630   72322 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:24.737647   72322 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:10:24.737658   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.740629   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741040   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.741063   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741251   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.741418   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.741530   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.741659   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.903190   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:10:24.944044   72322 node_ready.go:35] waiting up to 6m0s for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960395   72322 node_ready.go:49] node "no-preload-504385" has status "Ready":"True"
	I0906 20:10:24.960436   72322 node_ready.go:38] duration metric: took 16.357022ms for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960453   72322 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:24.981153   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:25.103072   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:25.113814   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:10:25.113843   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:10:25.123206   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:25.209178   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:10:25.209208   72322 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:10:25.255577   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.255604   72322 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:10:25.297179   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.336592   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336615   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.336915   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.336930   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.336938   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336945   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.337164   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.337178   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.350330   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.350356   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.350630   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.350648   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850349   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850377   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850688   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.850707   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850717   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850725   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850974   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.851012   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.033886   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.033918   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034215   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034221   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034241   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034250   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.034258   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034525   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034533   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034579   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034593   72322 addons.go:475] Verifying addon metrics-server=true in "no-preload-504385"
	I0906 20:10:26.036358   72322 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0906 20:10:26.037927   72322 addons.go:510] duration metric: took 1.367055829s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
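Once the addons are reported enabled, a quick manual check from the host could look like this (hypothetical follow-up commands, not part of the test run; they assume the "no-preload-504385" context written to the kubeconfig above):

	kubectl --context no-preload-504385 -n kube-system get pods   # storage-provisioner and metrics-server pods
	kubectl --context no-preload-504385 top nodes                 # served by metrics-server once it reports Ready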
	I0906 20:10:26.989945   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:28.987386   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:28.987407   72322 pod_ready.go:82] duration metric: took 4.006228588s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:28.987419   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:30.994020   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:32.999308   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:32.999332   72322 pod_ready.go:82] duration metric: took 4.01190401s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:32.999344   72322 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005872   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.005898   72322 pod_ready.go:82] duration metric: took 1.006546878s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005908   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010279   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.010306   72322 pod_ready.go:82] duration metric: took 4.391154ms for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010315   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014331   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.014346   72322 pod_ready.go:82] duration metric: took 4.025331ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014354   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018361   72322 pod_ready.go:93] pod "kube-proxy-48s2x" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.018378   72322 pod_ready.go:82] duration metric: took 4.018525ms for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018386   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191606   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.191630   72322 pod_ready.go:82] duration metric: took 173.23777ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191638   72322 pod_ready.go:39] duration metric: took 9.231173272s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:34.191652   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:10:34.191738   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:10:34.207858   72322 api_server.go:72] duration metric: took 9.537052258s to wait for apiserver process to appear ...
	I0906 20:10:34.207883   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:10:34.207904   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:10:34.214477   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:10:34.216178   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:10:34.216211   72322 api_server.go:131] duration metric: took 8.319856ms to wait for apiserver health ...
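The healthz probe above can be reproduced with kubectl against the same endpoint (illustrative, assuming the kubeconfig/context created for this profile):

	kubectl --context no-preload-504385 get --raw /healthz
	# expected output when the apiserver is healthy: ok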
	I0906 20:10:34.216221   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:10:34.396409   72322 system_pods.go:59] 9 kube-system pods found
	I0906 20:10:34.396443   72322 system_pods.go:61] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.396451   72322 system_pods.go:61] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.396456   72322 system_pods.go:61] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.396461   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.396468   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.396472   72322 system_pods.go:61] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.396477   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.396487   72322 system_pods.go:61] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.396502   72322 system_pods.go:61] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.396514   72322 system_pods.go:74] duration metric: took 180.284785ms to wait for pod list to return data ...
	I0906 20:10:34.396526   72322 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:10:34.592160   72322 default_sa.go:45] found service account: "default"
	I0906 20:10:34.592186   72322 default_sa.go:55] duration metric: took 195.651674ms for default service account to be created ...
	I0906 20:10:34.592197   72322 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:10:34.795179   72322 system_pods.go:86] 9 kube-system pods found
	I0906 20:10:34.795210   72322 system_pods.go:89] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.795217   72322 system_pods.go:89] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.795221   72322 system_pods.go:89] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.795224   72322 system_pods.go:89] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.795228   72322 system_pods.go:89] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.795232   72322 system_pods.go:89] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.795238   72322 system_pods.go:89] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.795244   72322 system_pods.go:89] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.795249   72322 system_pods.go:89] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.795258   72322 system_pods.go:126] duration metric: took 203.05524ms to wait for k8s-apps to be running ...
	I0906 20:10:34.795270   72322 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:10:34.795328   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:34.810406   72322 system_svc.go:56] duration metric: took 15.127486ms WaitForService to wait for kubelet
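The kubelet service check above is the systemd liveness probe minikube runs over SSH; on the node it is roughly equivalent to (illustrative):

	sudo systemctl is-active --quiet kubelet && echo "kubelet is running"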
	I0906 20:10:34.810437   72322 kubeadm.go:582] duration metric: took 10.13963577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:10:34.810461   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:10:34.993045   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:10:34.993077   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:10:34.993092   72322 node_conditions.go:105] duration metric: took 182.626456ms to run NodePressure ...
	I0906 20:10:34.993105   72322 start.go:241] waiting for startup goroutines ...
	I0906 20:10:34.993112   72322 start.go:246] waiting for cluster config update ...
	I0906 20:10:34.993122   72322 start.go:255] writing updated cluster config ...
	I0906 20:10:34.993401   72322 ssh_runner.go:195] Run: rm -f paused
	I0906 20:10:35.043039   72322 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:10:35.045782   72322 out.go:177] * Done! kubectl is now configured to use "no-preload-504385" cluster and "default" namespace by default
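A minimal sketch of using the freshly configured context (hypothetical follow-up, not part of the test run):

	kubectl config use-context no-preload-504385
	kubectl get nodes -o wide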
	I0906 20:11:02.719781   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:02.720062   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:02.720077   73230 kubeadm.go:310] 
	I0906 20:11:02.720125   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:11:02.720177   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:11:02.720189   73230 kubeadm.go:310] 
	I0906 20:11:02.720246   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:11:02.720290   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:11:02.720443   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:11:02.720469   73230 kubeadm.go:310] 
	I0906 20:11:02.720593   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:11:02.720665   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:11:02.720722   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:11:02.720746   73230 kubeadm.go:310] 
	I0906 20:11:02.720900   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:11:02.721018   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:11:02.721028   73230 kubeadm.go:310] 
	I0906 20:11:02.721180   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:11:02.721311   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:11:02.721405   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:11:02.721500   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:11:02.721512   73230 kubeadm.go:310] 
	I0906 20:11:02.722088   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:11:02.722199   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:11:02.722310   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0906 20:11:02.722419   73230 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
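On the affected node, the triage steps kubeadm suggests above boil down to the following (illustrative, using the same CRI-O socket the log references):

	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause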
	
	I0906 20:11:02.722469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:11:03.188091   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:11:03.204943   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:11:03.215434   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:11:03.215458   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:11:03.215506   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:11:03.225650   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:11:03.225713   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:11:03.236252   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:11:03.245425   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:11:03.245489   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:11:03.255564   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.264932   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:11:03.265014   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.274896   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:11:03.284027   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:11:03.284092   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
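The stale-config cleanup in the lines above amounts to the following loop (illustrative sketch of the same grep-then-remove logic):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done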
	I0906 20:11:03.294368   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:11:03.377411   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:11:03.377509   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:11:03.537331   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:11:03.537590   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:11:03.537722   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:11:03.728458   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:11:03.730508   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:11:03.730621   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:11:03.730720   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:11:03.730869   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:11:03.730984   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:11:03.731082   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:11:03.731167   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:11:03.731258   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:11:03.731555   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:11:03.731896   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:11:03.732663   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:11:03.732953   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:11:03.733053   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:11:03.839927   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:11:03.988848   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:11:04.077497   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:11:04.213789   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:11:04.236317   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:11:04.237625   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:11:04.237719   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:11:04.399036   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:11:04.400624   73230 out.go:235]   - Booting up control plane ...
	I0906 20:11:04.400709   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:11:04.401417   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:11:04.402751   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:11:04.404122   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:11:04.407817   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:11:44.410273   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:11:44.410884   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:44.411132   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:49.411428   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:49.411674   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:59.412917   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:59.413182   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:19.414487   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:19.414692   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415457   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:59.415729   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415750   73230 kubeadm.go:310] 
	I0906 20:12:59.415808   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:12:59.415864   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:12:59.415874   73230 kubeadm.go:310] 
	I0906 20:12:59.415933   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:12:59.415979   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:12:59.416147   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:12:59.416167   73230 kubeadm.go:310] 
	I0906 20:12:59.416332   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:12:59.416372   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:12:59.416420   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:12:59.416428   73230 kubeadm.go:310] 
	I0906 20:12:59.416542   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:12:59.416650   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:12:59.416659   73230 kubeadm.go:310] 
	I0906 20:12:59.416818   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:12:59.416928   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:12:59.417030   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:12:59.417139   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:12:59.417153   73230 kubeadm.go:310] 
	I0906 20:12:59.417400   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:12:59.417485   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:12:59.417559   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0906 20:12:59.417626   73230 kubeadm.go:394] duration metric: took 8m3.018298427s to StartCluster
	I0906 20:12:59.417673   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:12:59.417741   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:12:59.464005   73230 cri.go:89] found id: ""
	I0906 20:12:59.464033   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.464040   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:12:59.464045   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:12:59.464101   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:12:59.504218   73230 cri.go:89] found id: ""
	I0906 20:12:59.504252   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.504264   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:12:59.504271   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:12:59.504327   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:12:59.541552   73230 cri.go:89] found id: ""
	I0906 20:12:59.541579   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.541589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:12:59.541596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:12:59.541663   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:12:59.580135   73230 cri.go:89] found id: ""
	I0906 20:12:59.580158   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.580168   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:12:59.580174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:12:59.580220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:12:59.622453   73230 cri.go:89] found id: ""
	I0906 20:12:59.622486   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.622498   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:12:59.622518   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:12:59.622587   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:12:59.661561   73230 cri.go:89] found id: ""
	I0906 20:12:59.661590   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.661601   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:12:59.661608   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:12:59.661668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:12:59.695703   73230 cri.go:89] found id: ""
	I0906 20:12:59.695732   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.695742   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:12:59.695749   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:12:59.695808   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:12:59.739701   73230 cri.go:89] found id: ""
	I0906 20:12:59.739733   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.739744   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:12:59.739756   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:12:59.739771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:12:59.791400   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:12:59.791428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:12:59.851142   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:12:59.851179   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:12:59.867242   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:12:59.867278   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:12:59.941041   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:12:59.941060   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:12:59.941071   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0906 20:13:00.061377   73230 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 20:13:00.061456   73230 out.go:270] * 
	W0906 20:13:00.061515   73230 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.061532   73230 out.go:270] * 
	W0906 20:13:00.062343   73230 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 20:13:00.065723   73230 out.go:201] 
	W0906 20:13:00.066968   73230 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.067028   73230 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 20:13:00.067059   73230 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 20:13:00.068497   73230 out.go:201] 
	
	
	==> CRI-O <==
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.842083726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653581842062146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2d6b968-6403-4dba-8f7f-16ff8c840baf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.842679966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=932fc782-06a8-419d-9f71-12873b3105e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.842787726Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=932fc782-06a8-419d-9f71-12873b3105e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.842824587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=932fc782-06a8-419d-9f71-12873b3105e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.879142670Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9f68b78-7507-4f8e-b5be-034f99dcc6d2 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.879261130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9f68b78-7507-4f8e-b5be-034f99dcc6d2 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.880639059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4cef1a2-fea0-4258-9e67-84f7ce5ff332 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.881294568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653581881266982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4cef1a2-fea0-4258-9e67-84f7ce5ff332 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.881998034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9ac4359-c243-460d-a22e-315264409f3c name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.882067027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9ac4359-c243-460d-a22e-315264409f3c name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.882115556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e9ac4359-c243-460d-a22e-315264409f3c name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.919522204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f22b99b-b412-49ab-8d7f-f4a7fbbcb698 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.919621045Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f22b99b-b412-49ab-8d7f-f4a7fbbcb698 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.920966049Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a438612-f94b-49ba-a2cd-051f925aba24 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.921384742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653581921360071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a438612-f94b-49ba-a2cd-051f925aba24 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.922102204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5bdaba52-4f12-441f-a9d9-93ba4eb60fbd name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.922192064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5bdaba52-4f12-441f-a9d9-93ba4eb60fbd name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.922238090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5bdaba52-4f12-441f-a9d9-93ba4eb60fbd name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.955143793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b060da92-3339-4a23-b7ae-5ac5d812c422 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.955232092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b060da92-3339-4a23-b7ae-5ac5d812c422 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.956305980Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82f642bd-4e12-485b-9fa8-c94d9188e1e3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.956661257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653581956642129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82f642bd-4e12-485b-9fa8-c94d9188e1e3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.957206656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cdee2da-cd04-4dcb-a627-22f3fb12fbdc name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.957275628Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cdee2da-cd04-4dcb-a627-22f3fb12fbdc name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:13:01 old-k8s-version-843298 crio[630]: time="2024-09-06 20:13:01.957308523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7cdee2da-cd04-4dcb-a627-22f3fb12fbdc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep 6 20:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050933] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039157] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.987920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.571048] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.647123] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.681954] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.060444] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073389] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.178170] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.167558] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.279257] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.753089] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.068747] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.083570] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[Sep 6 20:05] kauditd_printk_skb: 46 callbacks suppressed
	[Sep 6 20:09] systemd-fstab-generator[5052]: Ignoring "noauto" option for root device
	[Sep 6 20:11] systemd-fstab-generator[5331]: Ignoring "noauto" option for root device
	[  +0.061919] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:13:02 up 8 min,  0 users,  load average: 0.01, 0.14, 0.11
	Linux old-k8s-version-843298 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000b4c630)
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]: goroutine 153 [select]:
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000957ef0, 0x4f0ac20, 0xc0009fea00, 0x1, 0xc0001000c0)
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002547e0, 0xc0001000c0)
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009cabc0, 0xc000a2ac00)
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5515]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 06 20:12:59 old-k8s-version-843298 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 06 20:12:59 old-k8s-version-843298 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 06 20:12:59 old-k8s-version-843298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 06 20:12:59 old-k8s-version-843298 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 06 20:12:59 old-k8s-version-843298 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5555]: I0906 20:12:59.812204    5555 server.go:416] Version: v1.20.0
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5555]: I0906 20:12:59.812573    5555 server.go:837] Client rotation is on, will bootstrap in background
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5555]: I0906 20:12:59.815167    5555 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5555]: I0906 20:12:59.816265    5555 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Sep 06 20:12:59 old-k8s-version-843298 kubelet[5555]: W0906 20:12:59.816407    5555 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298 -n old-k8s-version-843298
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 2 (248.241949ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-843298" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (728.14s)
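Note: the failure above ends with minikube's own suggestion (out.go:270) to retry with the kubelet cgroup driver pinned to systemd. A minimal sketch of that retry, reusing the profile name and flags already recorded in this run's Audit log; this is an illustration only and was not executed as part of this report:

	out/minikube-linux-amd64 start -p old-k8s-version-843298 \
	  --memory=2200 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd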

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0906 20:09:23.362344   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-458066 -n embed-certs-458066
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-06 20:18:22.242141813 +0000 UTC m=+6548.731895348
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-458066 -n embed-certs-458066
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-458066 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-458066 logs -n 25: (2.071312525s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-603826 sudo cat                              | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo find                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo crio                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-603826                                       | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-859361 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | disable-driver-mounts-859361                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:57 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-504385             | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-458066            | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653828  | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC | 06 Sep 24 19:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC |                     |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-504385                  | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-458066                 | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-843298        | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653828       | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-843298             | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 20:00:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:00:55.455816   73230 out.go:345] Setting OutFile to fd 1 ...
	I0906 20:00:55.455933   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.455943   73230 out.go:358] Setting ErrFile to fd 2...
	I0906 20:00:55.455951   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.456141   73230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 20:00:55.456685   73230 out.go:352] Setting JSON to false
	I0906 20:00:55.457698   73230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6204,"bootTime":1725646651,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 20:00:55.457762   73230 start.go:139] virtualization: kvm guest
	I0906 20:00:55.459863   73230 out.go:177] * [old-k8s-version-843298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 20:00:55.461119   73230 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 20:00:55.461167   73230 notify.go:220] Checking for updates...
	I0906 20:00:55.463398   73230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:00:55.464573   73230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:00:55.465566   73230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:00:55.466605   73230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 20:00:55.467834   73230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:00:55.469512   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:00:55.470129   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.470183   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.484881   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I0906 20:00:55.485238   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.485752   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.485776   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.486108   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.486296   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.488175   73230 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 20:00:55.489359   73230 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 20:00:55.489671   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.489705   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.504589   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0906 20:00:55.505047   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.505557   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.505581   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.505867   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.506018   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.541116   73230 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 20:00:55.542402   73230 start.go:297] selected driver: kvm2
	I0906 20:00:55.542423   73230 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-8
43298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.542548   73230 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:00:55.543192   73230 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.543257   73230 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 20:00:55.558465   73230 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 20:00:55.558833   73230 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:00:55.558865   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:00:55.558875   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:00:55.558908   73230 start.go:340] cluster config:
	{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.559011   73230 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.561521   73230 out.go:177] * Starting "old-k8s-version-843298" primary control-plane node in "old-k8s-version-843298" cluster
	I0906 20:00:55.309027   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:58.377096   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:55.562714   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:00:55.562760   73230 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 20:00:55.562773   73230 cache.go:56] Caching tarball of preloaded images
	I0906 20:00:55.562856   73230 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 20:00:55.562868   73230 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0906 20:00:55.562977   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:00:55.563173   73230 start.go:360] acquireMachinesLock for old-k8s-version-843298: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:01:04.457122   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:07.529093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:13.609120   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:16.681107   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:22.761164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:25.833123   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:31.913167   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:34.985108   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:41.065140   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:44.137176   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:50.217162   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:53.289137   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:59.369093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:02.441171   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:08.521164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:11.593164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:17.673124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:20.745159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:26.825154   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:29.897211   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:35.977181   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:39.049161   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:45.129172   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:48.201208   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:54.281103   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:57.353175   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:03.433105   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:06.505124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:12.585121   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:15.657169   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:21.737151   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:24.809135   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:30.889180   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:33.961145   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:40.041159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:43.113084   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:46.117237   72441 start.go:364] duration metric: took 4m28.485189545s to acquireMachinesLock for "embed-certs-458066"
	I0906 20:03:46.117298   72441 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:03:46.117309   72441 fix.go:54] fixHost starting: 
	I0906 20:03:46.117737   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:03:46.117773   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:03:46.132573   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0906 20:03:46.133029   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:03:46.133712   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:03:46.133743   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:03:46.134097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:03:46.134322   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:03:46.134505   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:03:46.136291   72441 fix.go:112] recreateIfNeeded on embed-certs-458066: state=Stopped err=<nil>
	I0906 20:03:46.136313   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	W0906 20:03:46.136466   72441 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:03:46.138544   72441 out.go:177] * Restarting existing kvm2 VM for "embed-certs-458066" ...
	I0906 20:03:46.139833   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Start
	I0906 20:03:46.140001   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring networks are active...
	I0906 20:03:46.140754   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network default is active
	I0906 20:03:46.141087   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network mk-embed-certs-458066 is active
	I0906 20:03:46.141402   72441 main.go:141] libmachine: (embed-certs-458066) Getting domain xml...
	I0906 20:03:46.142202   72441 main.go:141] libmachine: (embed-certs-458066) Creating domain...
	I0906 20:03:47.351460   72441 main.go:141] libmachine: (embed-certs-458066) Waiting to get IP...
	I0906 20:03:47.352248   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.352628   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.352699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.352597   73827 retry.go:31] will retry after 202.870091ms: waiting for machine to come up
	I0906 20:03:46.114675   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:03:46.114711   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115092   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:03:46.115118   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115306   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:03:46.117092   72322 machine.go:96] duration metric: took 4m37.429712277s to provisionDockerMachine
	I0906 20:03:46.117135   72322 fix.go:56] duration metric: took 4m37.451419912s for fixHost
	I0906 20:03:46.117144   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 4m37.45145595s
	W0906 20:03:46.117167   72322 start.go:714] error starting host: provision: host is not running
	W0906 20:03:46.117242   72322 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0906 20:03:46.117252   72322 start.go:729] Will try again in 5 seconds ...
	I0906 20:03:47.557228   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.557656   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.557682   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.557606   73827 retry.go:31] will retry after 357.664781ms: waiting for machine to come up
	I0906 20:03:47.917575   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.918041   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.918068   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.918005   73827 retry.go:31] will retry after 338.480268ms: waiting for machine to come up
	I0906 20:03:48.258631   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.259269   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.259305   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.259229   73827 retry.go:31] will retry after 554.173344ms: waiting for machine to come up
	I0906 20:03:48.814947   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.815491   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.815523   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.815449   73827 retry.go:31] will retry after 601.029419ms: waiting for machine to come up
	I0906 20:03:49.418253   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:49.418596   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:49.418623   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:49.418548   73827 retry.go:31] will retry after 656.451458ms: waiting for machine to come up
	I0906 20:03:50.076488   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:50.076908   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:50.076928   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:50.076875   73827 retry.go:31] will retry after 1.13800205s: waiting for machine to come up
	I0906 20:03:51.216380   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:51.216801   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:51.216831   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:51.216758   73827 retry.go:31] will retry after 1.071685673s: waiting for machine to come up
	I0906 20:03:52.289760   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:52.290174   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:52.290202   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:52.290125   73827 retry.go:31] will retry after 1.581761127s: waiting for machine to come up
	I0906 20:03:51.119269   72322 start.go:360] acquireMachinesLock for no-preload-504385: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:03:53.873755   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:53.874150   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:53.874184   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:53.874120   73827 retry.go:31] will retry after 1.99280278s: waiting for machine to come up
	I0906 20:03:55.869267   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:55.869747   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:55.869776   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:55.869685   73827 retry.go:31] will retry after 2.721589526s: waiting for machine to come up
	I0906 20:03:58.594012   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:58.594402   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:58.594428   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:58.594354   73827 retry.go:31] will retry after 2.763858077s: waiting for machine to come up
	I0906 20:04:01.359424   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:01.359775   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:04:01.359809   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:04:01.359736   73827 retry.go:31] will retry after 3.822567166s: waiting for machine to come up
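
The "will retry after Xms: waiting for machine to come up" lines above come from a backoff loop that polls libvirt for a DHCP lease until the restarted VM reports an IP address. Below is a minimal, self-contained sketch of that pattern; waitForIP, the growth factor, and the deadline are illustrative assumptions, not minikube's actual retry implementation.

// Illustrative sketch only: a backoff loop in the spirit of the
// "will retry after ...: waiting for machine to come up" log lines.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// waitForIP stands in for the real libvirt DHCP-lease lookup (hypothetical).
func waitForIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the fifth try
		return "", errNoIP
	}
	return "192.168.39.118", nil
}

func main() {
	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for attempt := 0; ; attempt++ {
		ip, err := waitForIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up waiting for machine to come up:", err)
			return
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // grow the wait, roughly like the logged intervals
	}
}
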
	I0906 20:04:06.669858   72867 start.go:364] duration metric: took 4m9.363403512s to acquireMachinesLock for "default-k8s-diff-port-653828"
	I0906 20:04:06.669929   72867 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:06.669938   72867 fix.go:54] fixHost starting: 
	I0906 20:04:06.670353   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:06.670393   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:06.688290   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44215
	I0906 20:04:06.688752   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:06.689291   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:04:06.689314   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:06.689692   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:06.689886   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:06.690048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:04:06.691557   72867 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653828: state=Stopped err=<nil>
	I0906 20:04:06.691592   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	W0906 20:04:06.691742   72867 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:06.693924   72867 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653828" ...
	I0906 20:04:06.694965   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Start
	I0906 20:04:06.695148   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring networks are active...
	I0906 20:04:06.695900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network default is active
	I0906 20:04:06.696316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network mk-default-k8s-diff-port-653828 is active
	I0906 20:04:06.696698   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Getting domain xml...
	I0906 20:04:06.697469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Creating domain...
	I0906 20:04:05.186782   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187288   72441 main.go:141] libmachine: (embed-certs-458066) Found IP for machine: 192.168.39.118
	I0906 20:04:05.187301   72441 main.go:141] libmachine: (embed-certs-458066) Reserving static IP address...
	I0906 20:04:05.187340   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has current primary IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187764   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.187784   72441 main.go:141] libmachine: (embed-certs-458066) Reserved static IP address: 192.168.39.118
	I0906 20:04:05.187797   72441 main.go:141] libmachine: (embed-certs-458066) DBG | skip adding static IP to network mk-embed-certs-458066 - found existing host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"}
	I0906 20:04:05.187805   72441 main.go:141] libmachine: (embed-certs-458066) Waiting for SSH to be available...
	I0906 20:04:05.187848   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Getting to WaitForSSH function...
	I0906 20:04:05.190229   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190546   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.190576   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190643   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH client type: external
	I0906 20:04:05.190679   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa (-rw-------)
	I0906 20:04:05.190714   72441 main.go:141] libmachine: (embed-certs-458066) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:05.190727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | About to run SSH command:
	I0906 20:04:05.190761   72441 main.go:141] libmachine: (embed-certs-458066) DBG | exit 0
	I0906 20:04:05.317160   72441 main.go:141] libmachine: (embed-certs-458066) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:05.317483   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetConfigRaw
	I0906 20:04:05.318089   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.320559   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.320944   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.320971   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.321225   72441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/config.json ...
	I0906 20:04:05.321445   72441 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:05.321465   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:05.321720   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.323699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.323972   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.324009   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.324126   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.324303   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324444   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324561   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.324706   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.324940   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.324953   72441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:05.437192   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:05.437217   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437479   72441 buildroot.go:166] provisioning hostname "embed-certs-458066"
	I0906 20:04:05.437495   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437665   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.440334   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440705   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.440733   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440925   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.441100   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441260   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441405   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.441573   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.441733   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.441753   72441 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-458066 && echo "embed-certs-458066" | sudo tee /etc/hostname
	I0906 20:04:05.566958   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-458066
	
	I0906 20:04:05.566986   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.569652   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.569984   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.570014   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.570158   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.570342   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570504   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570648   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.570838   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.571042   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.571060   72441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-458066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-458066/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-458066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:05.689822   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:05.689855   72441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:05.689882   72441 buildroot.go:174] setting up certificates
	I0906 20:04:05.689891   72441 provision.go:84] configureAuth start
	I0906 20:04:05.689899   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.690182   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.692758   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693151   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.693172   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693308   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.695364   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.695754   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695909   72441 provision.go:143] copyHostCerts
	I0906 20:04:05.695957   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:05.695975   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:05.696042   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:05.696123   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:05.696130   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:05.696153   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:05.696248   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:05.696257   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:05.696280   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:05.696329   72441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-458066 san=[127.0.0.1 192.168.39.118 embed-certs-458066 localhost minikube]
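
The provision step above generates a server certificate signed by the minikube CA with the SANs listed in the log (127.0.0.1, 192.168.39.118, embed-certs-458066, localhost, minikube). The following is a hedged sketch of issuing such a SAN-bearing server certificate with Go's crypto/x509; the in-memory self-signed CA stands in for the ca.pem/ca-key.pem pair on disk, and error handling is elided, so this is an illustration rather than minikube's code.

// Sketch: issue a server cert with the SANs shown in the provision log line.
// Errors are ignored for brevity; a real implementation must check them.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, standing in for the on-disk ca.pem/ca-key.pem pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-458066"}},
		DNSNames:     []string{"embed-certs-458066", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.118")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	fmt.Printf("server cert: %d DER bytes, SANs=%v %v\n", len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
}
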
	I0906 20:04:06.015593   72441 provision.go:177] copyRemoteCerts
	I0906 20:04:06.015656   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:06.015683   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.018244   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018598   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.018630   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018784   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.018990   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.019169   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.019278   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.110170   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 20:04:06.136341   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:06.161181   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:06.184758   72441 provision.go:87] duration metric: took 494.857261ms to configureAuth
	I0906 20:04:06.184786   72441 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:06.184986   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:06.185049   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.187564   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.187955   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.187978   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.188153   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.188399   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188571   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.188920   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.189070   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.189084   72441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:06.425480   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:06.425518   72441 machine.go:96] duration metric: took 1.104058415s to provisionDockerMachine
	I0906 20:04:06.425535   72441 start.go:293] postStartSetup for "embed-certs-458066" (driver="kvm2")
	I0906 20:04:06.425548   72441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:06.425572   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.425893   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:06.425919   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.428471   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428768   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.428794   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428928   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.429109   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.429283   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.429419   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.515180   72441 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:06.519357   72441 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:06.519390   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:06.519464   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:06.519540   72441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:06.519625   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:06.528542   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:06.552463   72441 start.go:296] duration metric: took 126.912829ms for postStartSetup
	I0906 20:04:06.552514   72441 fix.go:56] duration metric: took 20.435203853s for fixHost
	I0906 20:04:06.552540   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.554994   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555521   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.555556   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555739   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.555937   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556095   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556253   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.556409   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.556600   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.556613   72441 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:06.669696   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653046.632932221
	
	I0906 20:04:06.669720   72441 fix.go:216] guest clock: 1725653046.632932221
	I0906 20:04:06.669730   72441 fix.go:229] Guest: 2024-09-06 20:04:06.632932221 +0000 UTC Remote: 2024-09-06 20:04:06.552518521 +0000 UTC m=+289.061134864 (delta=80.4137ms)
	I0906 20:04:06.669761   72441 fix.go:200] guest clock delta is within tolerance: 80.4137ms
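
The fix.go lines above read the guest clock over SSH with `date +%s.%N` and compare it against the host-side timestamp, accepting the host when the delta is within tolerance. A small sketch of that comparison follows, assuming a parsed guest timestamp and an arbitrary 2-second tolerance (the real tolerance value is not shown in the log).

// Sketch of the guest-vs-host clock comparison suggested by the fix.go lines above.
// parseGuestClock and the 2s tolerance are illustrative assumptions.
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1725653046.632932221") // value taken from the log above
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 9, 6, 20, 4, 6, 552518521, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, not minikube's actual constant
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
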
	I0906 20:04:06.669769   72441 start.go:83] releasing machines lock for "embed-certs-458066", held for 20.552490687s
	I0906 20:04:06.669801   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.670060   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:06.673015   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673405   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.673433   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673599   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674041   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674210   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674304   72441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:06.674351   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.674414   72441 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:06.674437   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.676916   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677063   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677314   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677341   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677481   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677513   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677686   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677691   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677864   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677878   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678013   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678025   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.678191   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.758176   72441 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:06.782266   72441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:06.935469   72441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:06.941620   72441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:06.941680   72441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:06.957898   72441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:06.957927   72441 start.go:495] detecting cgroup driver to use...
	I0906 20:04:06.957995   72441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:06.978574   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:06.993967   72441 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:06.994035   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:07.008012   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:07.022073   72441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:07.133622   72441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:07.291402   72441 docker.go:233] disabling docker service ...
	I0906 20:04:07.291478   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:07.306422   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:07.321408   72441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:07.442256   72441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:07.564181   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:07.579777   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:07.599294   72441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:07.599361   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.610457   72441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:07.610555   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.621968   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.633527   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.645048   72441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:07.659044   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.670526   72441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.689465   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.701603   72441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:07.712085   72441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:07.712144   72441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:07.728406   72441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:07.739888   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:07.862385   72441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:07.954721   72441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:07.954792   72441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:07.959478   72441 start.go:563] Will wait 60s for crictl version
	I0906 20:04:07.959545   72441 ssh_runner.go:195] Run: which crictl
	I0906 20:04:07.963893   72441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:08.003841   72441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
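
"Will wait 60s for socket path /var/run/crio/crio.sock" amounts to polling until the socket file exists before asking crictl for its version. A minimal sketch of such a wait is below; the 500ms poll interval is an assumption.

// Sketch: poll for a socket file with a timeout, in the spirit of the
// "Will wait 60s for socket path" line above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
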
	I0906 20:04:08.003917   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.032191   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.063563   72441 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:04:07.961590   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting to get IP...
	I0906 20:04:07.962441   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962859   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962923   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:07.962841   73982 retry.go:31] will retry after 292.508672ms: waiting for machine to come up
	I0906 20:04:08.257346   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257845   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257867   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.257815   73982 retry.go:31] will retry after 265.967606ms: waiting for machine to come up
	I0906 20:04:08.525352   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525907   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.525834   73982 retry.go:31] will retry after 308.991542ms: waiting for machine to come up
	I0906 20:04:08.836444   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837021   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837053   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.836973   73982 retry.go:31] will retry after 483.982276ms: waiting for machine to come up
	I0906 20:04:09.322661   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323161   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323184   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.323125   73982 retry.go:31] will retry after 574.860867ms: waiting for machine to come up
	I0906 20:04:09.899849   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900228   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.900187   73982 retry.go:31] will retry after 769.142372ms: waiting for machine to come up
	I0906 20:04:10.671316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671796   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671853   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:10.671771   73982 retry.go:31] will retry after 720.232224ms: waiting for machine to come up
	I0906 20:04:11.393120   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393502   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393534   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:11.393447   73982 retry.go:31] will retry after 975.812471ms: waiting for machine to come up
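The repeated "will retry after …: waiting for machine to come up" lines above are a retry loop with growing, jittered delays while libvirt hands the new VM a DHCP lease. The sketch below is not the retry.go implementation, just a minimal illustration of the pattern; lookupIP is a hypothetical stand-in that fails a few times before returning an address:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP is a hypothetical stand-in for asking libvirt/DHCP for the
// domain's address; here it simply fails the first few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.50.16", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, roughly like the intervals in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}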
	I0906 20:04:08.064907   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:08.067962   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068410   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:08.068442   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068626   72441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:08.072891   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
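The bash one-liner above makes the host.minikube.internal entry idempotent: drop any existing line for that name, append a fresh one, and copy the temp file back over /etc/hosts. The same idea expressed in Go; writing the result to a scratch file instead of /etc/hosts directly is an assumption made so the sketch is safe to run:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost returns hosts-file content with exactly one entry mapping name to ip.
func upsertHost(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(content, "\n") {
		// Drop any existing line that already ends in "<tab>name".
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	updated := upsertHost(string(data), "192.168.39.1", "host.minikube.internal")
	// Write a scratch copy; the real flow copies it back into place with sudo cp.
	if err := os.WriteFile("/tmp/hosts.updated", []byte(updated), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote /tmp/hosts.updated")
}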
	I0906 20:04:08.086275   72441 kubeadm.go:883] updating cluster {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:08.086383   72441 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:08.086423   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:08.123100   72441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:08.123158   72441 ssh_runner.go:195] Run: which lz4
	I0906 20:04:08.127330   72441 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:08.131431   72441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:08.131466   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:09.584066   72441 crio.go:462] duration metric: took 1.456765631s to copy over tarball
	I0906 20:04:09.584131   72441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:11.751911   72441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.167751997s)
	I0906 20:04:11.751949   72441 crio.go:469] duration metric: took 2.167848466s to extract the tarball
	I0906 20:04:11.751959   72441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:11.790385   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:11.831973   72441 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:11.831995   72441 cache_images.go:84] Images are preloaded, skipping loading
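When the first "crictl images" check fails ("couldn't find preloaded image…"), the preload tarball is copied over and unpacked straight into /var so CRI-O's image store is populated before kubeadm runs; the second check then reports all images preloaded. A minimal local sketch of the extract step, using the same tar flags and paths as the log (skipping the SSH hop is an assumption):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "preload tarball not present:", err)
		os.Exit(1)
	}
	// Same invocation as the log: keep security xattrs, decompress with lz4,
	// unpack into /var (which holds CRI-O's image and container storage).
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	// Afterwards "sudo crictl images" should list the preloaded images.
	fmt.Println("preload extracted; cleaning up", tarball)
	_ = exec.Command("sudo", "rm", "-f", tarball).Run()
}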
	I0906 20:04:11.832003   72441 kubeadm.go:934] updating node { 192.168.39.118 8443 v1.31.0 crio true true} ...
	I0906 20:04:11.832107   72441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-458066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:11.832166   72441 ssh_runner.go:195] Run: crio config
	I0906 20:04:11.881946   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:11.881973   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:11.882000   72441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:11.882028   72441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-458066 NodeName:embed-certs-458066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:11.882186   72441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-458066"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:11.882266   72441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:11.892537   72441 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:11.892617   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:11.902278   72441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0906 20:04:11.920451   72441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:11.938153   72441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
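The kubeadm.yaml.new written above (and later copied to kubeadm.yaml) is a single file holding four YAML documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---". A small sketch that walks the documents and prints each schema, assuming the gopkg.in/yaml.v3 module is available and using the file path from the log:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "bad document:", err)
			os.Exit(1)
		}
		// Each document names its schema via apiVersion/kind.
		fmt.Printf("%s / %s\n", doc["apiVersion"], doc["kind"])
	}
}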
	I0906 20:04:11.957510   72441 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:11.961364   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:11.973944   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:12.109677   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:12.126348   72441 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066 for IP: 192.168.39.118
	I0906 20:04:12.126378   72441 certs.go:194] generating shared ca certs ...
	I0906 20:04:12.126399   72441 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:12.126562   72441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:12.126628   72441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:12.126642   72441 certs.go:256] generating profile certs ...
	I0906 20:04:12.126751   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/client.key
	I0906 20:04:12.126843   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key.c10a03b1
	I0906 20:04:12.126904   72441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key
	I0906 20:04:12.127063   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:12.127111   72441 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:12.127123   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:12.127153   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:12.127189   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:12.127218   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:12.127268   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:12.128117   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:12.185978   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:12.218124   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:12.254546   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:12.290098   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0906 20:04:12.317923   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:12.341186   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:12.363961   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:04:12.388000   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:12.418618   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:12.442213   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:12.465894   72441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:12.482404   72441 ssh_runner.go:195] Run: openssl version
	I0906 20:04:12.488370   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:12.499952   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504565   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504619   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.510625   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:12.522202   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:12.370306   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370743   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370779   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:12.370688   73982 retry.go:31] will retry after 1.559820467s: waiting for machine to come up
	I0906 20:04:13.932455   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933042   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933072   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:13.932985   73982 retry.go:31] will retry after 1.968766852s: waiting for machine to come up
	I0906 20:04:15.903304   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903826   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903855   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:15.903775   73982 retry.go:31] will retry after 2.738478611s: waiting for machine to come up
	I0906 20:04:12.533501   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538229   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538284   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.544065   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:12.555220   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:12.566402   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571038   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571093   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.577057   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:12.588056   72441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:12.592538   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:12.598591   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:12.604398   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:12.610502   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:12.616513   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:12.622859   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
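Each "openssl x509 -noout -in … -checkend 86400" above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would force regeneration. An equivalent check in pure Go using crypto/x509 (the file path is one example taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid after d.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// Equivalent of: openssl x509 -noout -in <path> -checkend 86400
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}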
	I0906 20:04:12.628975   72441 kubeadm.go:392] StartCluster: {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:12.629103   72441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:12.629154   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.667699   72441 cri.go:89] found id: ""
	I0906 20:04:12.667764   72441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:12.678070   72441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:12.678092   72441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:12.678148   72441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:12.687906   72441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:12.688889   72441 kubeconfig.go:125] found "embed-certs-458066" server: "https://192.168.39.118:8443"
	I0906 20:04:12.690658   72441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:12.700591   72441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.118
	I0906 20:04:12.700623   72441 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:12.700635   72441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:12.700675   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.741471   72441 cri.go:89] found id: ""
	I0906 20:04:12.741553   72441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:12.757877   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:12.767729   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:12.767748   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:12.767800   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:12.777094   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:12.777157   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:12.786356   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:12.795414   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:12.795470   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:12.804727   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.813481   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:12.813534   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.822844   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:12.831877   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:12.831930   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:12.841082   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
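The grep/rm pairs above are the stale-config check: any of the four kubeconfig-style files under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed so kubeadm will regenerate it. A compact sketch of that loop with the paths and endpoint from the log; silently skipping files that do not exist is a simplification (the real flow runs rm -f regardless):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up, kubeadm will create it.
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale config:", f)
			if err := os.Remove(f); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}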
	I0906 20:04:12.850560   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:12.975888   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:13.850754   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.064392   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.140680   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
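Rather than running a full "kubeadm init", the restart path above replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A minimal sketch that issues the same sequence with the PATH prefix shown in the log (the config path and version directory are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// Mirrors: sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm ...
		cmd := exec.Command("sudo", append([]string{
			"env", "PATH=/var/lib/minikube/binaries/v1.31.0:" + os.Getenv("PATH"), "kubeadm",
		}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}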
	I0906 20:04:14.239317   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:14.239411   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:14.740313   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.240388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.740388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.755429   72441 api_server.go:72] duration metric: took 1.516111342s to wait for apiserver process to appear ...
	I0906 20:04:15.755462   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:15.755483   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.544772   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.544807   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.544824   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.596487   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.596546   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.755752   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.761917   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:18.761946   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.256512   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.265937   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.265973   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.756568   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.763581   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.763606   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:20.256237   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:20.262036   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:04:20.268339   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:20.268364   72441 api_server.go:131] duration metric: took 4.512894792s to wait for apiserver health ...
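The healthz wait above tolerates 403 (anonymous access to /healthz is forbidden until the RBAC bootstrap roles exist) and 500 (individual poststarthooks still settling), and only stops once the endpoint returns 200 "ok". A stand-alone probe doing the same thing; certificate verification is disabled because the endpoint is signed by the cluster's own CA, and the 500ms interval mirrors the polling cadence in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by minikubeCA, not a public CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.118:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body)) // "ok"
				return
			}
			// 403 before RBAC bootstrap, 500 while poststarthooks settle.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for apiserver health")
	os.Exit(1)
}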
	I0906 20:04:20.268372   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:20.268378   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:20.270262   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:18.644597   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645056   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645088   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:18.644992   73982 retry.go:31] will retry after 2.982517528s: waiting for machine to come up
	I0906 20:04:21.631028   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631392   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631414   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:21.631367   73982 retry.go:31] will retry after 3.639469531s: waiting for machine to come up
	I0906 20:04:20.271474   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:20.282996   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:04:20.303957   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:20.315560   72441 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:20.315602   72441 system_pods.go:61] "coredns-6f6b679f8f-v6z7z" [b2c18dba-1210-4e95-a705-95abceca92f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:20.315611   72441 system_pods.go:61] "etcd-embed-certs-458066" [cf60e7c7-1801-42c7-be25-85242c22a5d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:20.315619   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [48c684ec-f93f-49ec-868b-6e7bc20ad506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:20.315625   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [1d55b520-2d8f-4517-a491-8193eaff5d89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:20.315631   72441 system_pods.go:61] "kube-proxy-crvq7" [f0610684-81ee-426a-adc2-aea80faab822] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:20.315639   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [d8744325-58f2-43a8-9a93-516b5a6fb989] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:20.315644   72441 system_pods.go:61] "metrics-server-6867b74b74-gtg94" [600e9c90-20db-407e-b586-fae3809d87b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:20.315649   72441 system_pods.go:61] "storage-provisioner" [1efe7188-2d33-4a29-afbe-823adbef73b3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:20.315657   72441 system_pods.go:74] duration metric: took 11.674655ms to wait for pod list to return data ...
	I0906 20:04:20.315665   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:20.318987   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:20.319012   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:20.319023   72441 node_conditions.go:105] duration metric: took 3.354197ms to run NodePressure ...
	I0906 20:04:20.319038   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:20.600925   72441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607562   72441 kubeadm.go:739] kubelet initialised
	I0906 20:04:20.607590   72441 kubeadm.go:740] duration metric: took 6.637719ms waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607602   72441 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:20.611592   72441 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:26.558023   73230 start.go:364] duration metric: took 3m30.994815351s to acquireMachinesLock for "old-k8s-version-843298"
	I0906 20:04:26.558087   73230 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:26.558096   73230 fix.go:54] fixHost starting: 
	I0906 20:04:26.558491   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:26.558542   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:26.576511   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0906 20:04:26.576933   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:26.577434   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:04:26.577460   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:26.577794   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:26.577968   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:26.578128   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetState
	I0906 20:04:26.579640   73230 fix.go:112] recreateIfNeeded on old-k8s-version-843298: state=Stopped err=<nil>
	I0906 20:04:26.579674   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	W0906 20:04:26.579829   73230 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:26.581843   73230 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-843298" ...
	I0906 20:04:25.275406   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275902   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Found IP for machine: 192.168.50.16
	I0906 20:04:25.275942   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has current primary IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275955   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserving static IP address...
	I0906 20:04:25.276431   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.276463   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserved static IP address: 192.168.50.16
	I0906 20:04:25.276482   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | skip adding static IP to network mk-default-k8s-diff-port-653828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"}
	I0906 20:04:25.276493   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for SSH to be available...
	I0906 20:04:25.276512   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Getting to WaitForSSH function...
	I0906 20:04:25.278727   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279006   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.279037   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279196   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH client type: external
	I0906 20:04:25.279234   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa (-rw-------)
	I0906 20:04:25.279289   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:25.279312   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | About to run SSH command:
	I0906 20:04:25.279330   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | exit 0
	I0906 20:04:25.405134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | SSH cmd err, output: <nil>: 
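"Waiting for SSH to be available" above amounts to running "exit 0" over ssh with the client options shown until the command succeeds. A small sketch with a subset of the same options; the key path and address are the ones from the log, and the 2-second retry interval is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(addr, keyPath string) bool {
	// Mirrors the external ssh invocation in the log: no host-key prompts,
	// key auth only, quiet output, just run "exit 0".
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "ConnectTimeout=10",
		"-o", "LogLevel=quiet",
		"-i", keyPath,
		"docker@"+addr, "exit 0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa"
	for !sshReady("192.168.50.16", key) {
		fmt.Println("ssh not ready yet, retrying")
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH is available")
}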
	I0906 20:04:25.405524   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetConfigRaw
	I0906 20:04:25.406134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.408667   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409044   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.409074   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409332   72867 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/config.json ...
	I0906 20:04:25.409513   72867 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:25.409530   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:25.409724   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.411737   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412027   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.412060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412171   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.412362   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412489   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412662   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.412802   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.413045   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.413059   72867 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:25.513313   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:25.513343   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513613   72867 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653828"
	I0906 20:04:25.513644   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513851   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.516515   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.516847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.516895   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.517116   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.517300   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517461   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517574   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.517712   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.517891   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.517905   72867 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653828 && echo "default-k8s-diff-port-653828" | sudo tee /etc/hostname
	I0906 20:04:25.637660   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653828
	
	I0906 20:04:25.637691   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.640258   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640600   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.640626   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640811   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.641001   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641177   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641333   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.641524   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.641732   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.641754   72867 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:25.749746   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:25.749773   72867 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:25.749795   72867 buildroot.go:174] setting up certificates
	I0906 20:04:25.749812   72867 provision.go:84] configureAuth start
	I0906 20:04:25.749828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.750111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.752528   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.752893   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.752920   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.753104   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.755350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755642   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.755666   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755808   72867 provision.go:143] copyHostCerts
	I0906 20:04:25.755858   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:25.755875   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:25.755930   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:25.756017   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:25.756024   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:25.756046   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:25.756129   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:25.756137   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:25.756155   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:25.756212   72867 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653828 san=[127.0.0.1 192.168.50.16 default-k8s-diff-port-653828 localhost minikube]
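Note: the "generating server cert" step issues a machine certificate signed by the minikube CA, with the org and SAN list shown in the log line above and the 26280h (3-year) expiration from the profile config. A rough, self-contained sketch of issuing such a certificate with Go's crypto/x509 (the throwaway in-memory CA and the elided error handling are simplifications; minikube loads its CA key/cert from disk):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA for the sketch (errors elided for brevity).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs from the log: two IPs and three host names.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-653828"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.16")},
    		DNSNames:     []string{"default-k8s-diff-port-653828", "localhost", "minikube"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	_ = srvDER // in practice this would be PEM-encoded into server.pem / server-key.pem
    }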
	I0906 20:04:25.934931   72867 provision.go:177] copyRemoteCerts
	I0906 20:04:25.935018   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:25.935060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.937539   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.937899   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.937925   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.938111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.938308   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.938469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.938644   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.019666   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:26.043989   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0906 20:04:26.066845   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 20:04:26.090526   72867 provision.go:87] duration metric: took 340.698646ms to configureAuth
	I0906 20:04:26.090561   72867 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:26.090786   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:26.090878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.093783   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094167   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.094201   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094503   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.094689   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094850   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094975   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.095130   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.095357   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.095389   72867 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:26.324270   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:26.324301   72867 machine.go:96] duration metric: took 914.775498ms to provisionDockerMachine
	I0906 20:04:26.324315   72867 start.go:293] postStartSetup for "default-k8s-diff-port-653828" (driver="kvm2")
	I0906 20:04:26.324328   72867 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:26.324350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.324726   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:26.324759   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.327339   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327718   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.327750   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.328147   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.328309   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.328449   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.408475   72867 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:26.413005   72867 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:26.413033   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:26.413107   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:26.413203   72867 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:26.413320   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:26.422811   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:26.449737   72867 start.go:296] duration metric: took 125.408167ms for postStartSetup
	I0906 20:04:26.449772   72867 fix.go:56] duration metric: took 19.779834553s for fixHost
	I0906 20:04:26.449792   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.452589   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.452990   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.453022   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.453323   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.453529   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453710   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.453966   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.454125   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.454136   72867 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:26.557844   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653066.531604649
	
	I0906 20:04:26.557875   72867 fix.go:216] guest clock: 1725653066.531604649
	I0906 20:04:26.557884   72867 fix.go:229] Guest: 2024-09-06 20:04:26.531604649 +0000 UTC Remote: 2024-09-06 20:04:26.449775454 +0000 UTC m=+269.281822801 (delta=81.829195ms)
	I0906 20:04:26.557904   72867 fix.go:200] guest clock delta is within tolerance: 81.829195ms
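Note: fix.go compares the guest clock (read over SSH with `date +%s.%N`) against the host clock and only resynchronizes the VM if the delta exceeds a tolerance; here the 81.8ms delta is accepted. A trivial sketch of that comparison, using the timestamps from the log (the 2-second tolerance is an assumption for illustration):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the absolute guest/host clock delta is
    // small enough to skip resynchronizing the VM clock.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }

    func main() {
    	guest := time.Unix(1725653066, 531604649)        // from `date +%s.%N` in the log
    	host := guest.Add(-81829195 * time.Nanosecond)    // 81.829195ms earlier, as reported
    	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
    }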
	I0906 20:04:26.557909   72867 start.go:83] releasing machines lock for "default-k8s-diff-port-653828", held for 19.888002519s
	I0906 20:04:26.557943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.558256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:26.561285   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561705   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.561732   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562425   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562628   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562732   72867 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:26.562782   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.562920   72867 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:26.562950   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.565587   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.565970   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566149   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566331   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.566542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.566605   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566633   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566744   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.566756   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566992   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.567145   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.567302   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.672529   72867 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:26.678762   72867 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:26.825625   72867 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:26.832290   72867 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:26.832363   72867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:26.848802   72867 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:26.848824   72867 start.go:495] detecting cgroup driver to use...
	I0906 20:04:26.848917   72867 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:26.864986   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:26.878760   72867 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:26.878813   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:26.893329   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:26.909090   72867 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:27.025534   72867 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:27.190190   72867 docker.go:233] disabling docker service ...
	I0906 20:04:27.190293   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:22.617468   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:24.618561   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.118448   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.204700   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:27.217880   72867 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:27.346599   72867 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:27.466601   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:27.480785   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:27.501461   72867 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:27.501523   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.511815   72867 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:27.511868   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.521806   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.532236   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.542227   72867 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:27.552389   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.563462   72867 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.583365   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.594465   72867 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:27.605074   72867 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:27.605140   72867 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:27.618702   72867 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
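Note: the sysctl probe above fails with status 255 because the br_netfilter module is not loaded yet, so the next two commands load it and enable IPv4 forwarding. A small local sketch of those two steps (in the log they are executed over SSH with sudo; error handling is simplified):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Load the bridge netfilter module so that the
    	// net.bridge.bridge-nf-call-iptables sysctl exists at all.
    	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    		fmt.Printf("modprobe failed: %v: %s\n", err, out)
    		return
    	}
    	// Enable IPv4 forwarding, the equivalent of
    	// `echo 1 > /proc/sys/net/ipv4/ip_forward` (requires root).
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
    		fmt.Println("enable ip_forward:", err)
    	}
    }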
	I0906 20:04:27.630566   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:27.748387   72867 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:27.841568   72867 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:27.841652   72867 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:27.846880   72867 start.go:563] Will wait 60s for crictl version
	I0906 20:04:27.846936   72867 ssh_runner.go:195] Run: which crictl
	I0906 20:04:27.851177   72867 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:27.895225   72867 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:27.895327   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.934388   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.966933   72867 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:04:26.583194   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .Start
	I0906 20:04:26.583341   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring networks are active...
	I0906 20:04:26.584046   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network default is active
	I0906 20:04:26.584420   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network mk-old-k8s-version-843298 is active
	I0906 20:04:26.584851   73230 main.go:141] libmachine: (old-k8s-version-843298) Getting domain xml...
	I0906 20:04:26.585528   73230 main.go:141] libmachine: (old-k8s-version-843298) Creating domain...
	I0906 20:04:27.874281   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting to get IP...
	I0906 20:04:27.875189   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:27.875762   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:27.875844   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:27.875754   74166 retry.go:31] will retry after 289.364241ms: waiting for machine to come up
	I0906 20:04:28.166932   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.167349   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.167375   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.167303   74166 retry.go:31] will retry after 317.106382ms: waiting for machine to come up
	I0906 20:04:28.485664   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.486147   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.486241   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.486199   74166 retry.go:31] will retry after 401.712201ms: waiting for machine to come up
	I0906 20:04:28.890039   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.890594   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.890621   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.890540   74166 retry.go:31] will retry after 570.418407ms: waiting for machine to come up
	I0906 20:04:29.462983   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:29.463463   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:29.463489   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:29.463428   74166 retry.go:31] will retry after 696.361729ms: waiting for machine to come up
	I0906 20:04:30.161305   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:30.161829   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:30.161876   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:30.161793   74166 retry.go:31] will retry after 896.800385ms: waiting for machine to come up
	I0906 20:04:27.968123   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:27.971448   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.971880   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:27.971904   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.972128   72867 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:27.981160   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:27.994443   72867 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653
828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:27.994575   72867 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:27.994635   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:28.043203   72867 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:28.043285   72867 ssh_runner.go:195] Run: which lz4
	I0906 20:04:28.048798   72867 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:28.053544   72867 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:28.053577   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:29.490070   72867 crio.go:462] duration metric: took 1.441303819s to copy over tarball
	I0906 20:04:29.490142   72867 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:31.649831   72867 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159650072s)
	I0906 20:04:31.649870   72867 crio.go:469] duration metric: took 2.159772826s to extract the tarball
	I0906 20:04:31.649880   72867 ssh_runner.go:146] rm: /preloaded.tar.lz4
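Note: because `crictl images` found no preloaded images in CRI-O's store, the ~389MB preload tarball is copied to the VM and unpacked into /var with lz4, after which the image check passes. A condensed sketch of the check-then-extract flow, using the same paths and tar flags as the log (run locally here; the scp step is only represented by a message):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"

    	// Equivalent of `stat -c "%s %y" /preloaded.tar.lz4`: copy only if missing.
    	if _, err := os.Stat(tarball); os.IsNotExist(err) {
    		fmt.Println("tarball missing; it would be scp'd from the host cache here")
    	}

    	// Same extraction command the log runs over SSH.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v: %s\n", err, out)
    	}
    }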
	I0906 20:04:31.686875   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:31.729557   72867 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:31.729580   72867 cache_images.go:84] Images are preloaded, skipping loading
	I0906 20:04:31.729587   72867 kubeadm.go:934] updating node { 192.168.50.16 8444 v1.31.0 crio true true} ...
	I0906 20:04:31.729698   72867 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:31.729799   72867 ssh_runner.go:195] Run: crio config
	I0906 20:04:31.777272   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:31.777299   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:31.777316   72867 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:31.777336   72867 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.16 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653828 NodeName:default-k8s-diff-port-653828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:31.777509   72867 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.16
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653828"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:31.777577   72867 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:31.788008   72867 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:31.788070   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:31.798261   72867 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0906 20:04:31.815589   72867 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:31.832546   72867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0906 20:04:31.849489   72867 ssh_runner.go:195] Run: grep 192.168.50.16	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:31.853452   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:31.866273   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:31.984175   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:32.001110   72867 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828 for IP: 192.168.50.16
	I0906 20:04:32.001139   72867 certs.go:194] generating shared ca certs ...
	I0906 20:04:32.001160   72867 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:32.001343   72867 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:32.001399   72867 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:32.001413   72867 certs.go:256] generating profile certs ...
	I0906 20:04:32.001509   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/client.key
	I0906 20:04:32.001613   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key.01951d83
	I0906 20:04:32.001665   72867 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key
	I0906 20:04:32.001815   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:32.001866   72867 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:32.001880   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:32.001913   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:32.001933   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:32.001962   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:32.002001   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:32.002812   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:32.037177   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:32.078228   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:32.117445   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:32.153039   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 20:04:32.186458   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:28.120786   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:28.120826   72441 pod_ready.go:82] duration metric: took 7.509209061s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:28.120842   72441 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:30.129518   72441 pod_ready.go:103] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:31.059799   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.060272   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.060294   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.060226   74166 retry.go:31] will retry after 841.627974ms: waiting for machine to come up
	I0906 20:04:31.903823   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.904258   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.904280   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.904238   74166 retry.go:31] will retry after 1.274018797s: waiting for machine to come up
	I0906 20:04:33.179723   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:33.180090   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:33.180133   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:33.180059   74166 retry.go:31] will retry after 1.496142841s: waiting for machine to come up
	I0906 20:04:34.678209   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:34.678697   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:34.678726   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:34.678652   74166 retry.go:31] will retry after 1.795101089s: waiting for machine to come up
	I0906 20:04:32.216815   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:32.245378   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:32.272163   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:32.297017   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:32.321514   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:32.345724   72867 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:32.362488   72867 ssh_runner.go:195] Run: openssl version
	I0906 20:04:32.368722   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:32.380099   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384777   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384834   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.392843   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:32.405716   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:32.417043   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422074   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422143   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.427946   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:32.439430   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:32.450466   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455056   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455114   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.460970   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:32.471978   72867 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:32.476838   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:32.483008   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:32.489685   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:32.496446   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:32.502841   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:32.509269   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 20:04:32.515687   72867 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:32.515791   72867 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:32.515853   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.567687   72867 cri.go:89] found id: ""
	I0906 20:04:32.567763   72867 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:32.578534   72867 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:32.578552   72867 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:32.578598   72867 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:32.588700   72867 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:32.589697   72867 kubeconfig.go:125] found "default-k8s-diff-port-653828" server: "https://192.168.50.16:8444"
	I0906 20:04:32.591739   72867 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:32.601619   72867 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.16
	I0906 20:04:32.601649   72867 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:32.601659   72867 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:32.601724   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.640989   72867 cri.go:89] found id: ""
	I0906 20:04:32.641056   72867 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:32.659816   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:32.670238   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:32.670274   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:32.670327   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:04:32.679687   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:32.679778   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:32.689024   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:04:32.698403   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:32.698465   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:32.707806   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.717015   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:32.717105   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.726408   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:04:32.735461   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:32.735538   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:32.744701   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:32.754202   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:32.874616   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.759668   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.984693   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.051998   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.155274   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:34.155384   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:34.655749   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.156069   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.656120   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.672043   72867 api_server.go:72] duration metric: took 1.516769391s to wait for apiserver process to appear ...
	I0906 20:04:35.672076   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:35.672099   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:32.628208   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.628235   72441 pod_ready.go:82] duration metric: took 4.507383414s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.628248   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633941   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.633965   72441 pod_ready.go:82] duration metric: took 5.709738ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633975   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639227   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.639249   72441 pod_ready.go:82] duration metric: took 5.26842ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639259   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644664   72441 pod_ready.go:93] pod "kube-proxy-crvq7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.644690   72441 pod_ready.go:82] duration metric: took 5.423551ms for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644701   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650000   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.650022   72441 pod_ready.go:82] duration metric: took 5.312224ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650034   72441 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:34.657709   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:37.157744   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:38.092386   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.092429   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.092448   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.129071   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.129110   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.172277   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.213527   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.213573   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:38.673103   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.677672   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.677704   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.172237   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.179638   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:39.179670   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.672801   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.678523   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:04:39.688760   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:39.688793   72867 api_server.go:131] duration metric: took 4.016709147s to wait for apiserver health ...
	I0906 20:04:39.688804   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:39.688812   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:39.690721   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:36.474937   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:36.475399   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:36.475497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:36.475351   74166 retry.go:31] will retry after 1.918728827s: waiting for machine to come up
	I0906 20:04:38.397024   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:38.397588   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:38.397617   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:38.397534   74166 retry.go:31] will retry after 3.460427722s: waiting for machine to come up
	I0906 20:04:39.692055   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:39.707875   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:04:39.728797   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:39.740514   72867 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:39.740553   72867 system_pods.go:61] "coredns-6f6b679f8f-mvwth" [53675f76-d849-471c-9cd1-561e2f8e6499] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:39.740562   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [f69c9488-87d4-487e-902b-588182c2e2e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:39.740567   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [d641f983-776e-4102-81a3-ba3cf49911a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:39.740579   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [1b09e88d-b038-42d3-9c36-4eee1eff1c4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:39.740585   72867 system_pods.go:61] "kube-proxy-9wlq4" [5254a977-ded3-439d-8db0-cd54ccd96940] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:39.740590   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [f8c16cf5-2c76-428f-83de-e79c49566683] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:39.740594   72867 system_pods.go:61] "metrics-server-6867b74b74-dds56" [6219eb1e-2904-487c-b4ed-d786a0627281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:39.740598   72867 system_pods.go:61] "storage-provisioner" [58dd82cd-e250-4f57-97ad-55408f001cc3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:39.740605   72867 system_pods.go:74] duration metric: took 11.784722ms to wait for pod list to return data ...
	I0906 20:04:39.740614   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:39.745883   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:39.745913   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:39.745923   72867 node_conditions.go:105] duration metric: took 5.304169ms to run NodePressure ...
	I0906 20:04:39.745945   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:40.031444   72867 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036537   72867 kubeadm.go:739] kubelet initialised
	I0906 20:04:40.036556   72867 kubeadm.go:740] duration metric: took 5.087185ms waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036563   72867 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:40.044926   72867 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:42.050947   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:39.657641   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:42.156327   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:41.860109   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:41.860612   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:41.860640   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:41.860560   74166 retry.go:31] will retry after 4.509018672s: waiting for machine to come up
	I0906 20:04:44.051148   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.554068   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:44.157427   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.656559   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:47.793833   72322 start.go:364] duration metric: took 56.674519436s to acquireMachinesLock for "no-preload-504385"
	I0906 20:04:47.793890   72322 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:47.793898   72322 fix.go:54] fixHost starting: 
	I0906 20:04:47.794329   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:47.794363   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:47.812048   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0906 20:04:47.812496   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:47.813081   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:04:47.813109   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:47.813446   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:47.813741   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:04:47.813945   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:04:47.815314   72322 fix.go:112] recreateIfNeeded on no-preload-504385: state=Stopped err=<nil>
	I0906 20:04:47.815338   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	W0906 20:04:47.815507   72322 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:47.817424   72322 out.go:177] * Restarting existing kvm2 VM for "no-preload-504385" ...
	I0906 20:04:47.818600   72322 main.go:141] libmachine: (no-preload-504385) Calling .Start
	I0906 20:04:47.818760   72322 main.go:141] libmachine: (no-preload-504385) Ensuring networks are active...
	I0906 20:04:47.819569   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network default is active
	I0906 20:04:47.819883   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network mk-no-preload-504385 is active
	I0906 20:04:47.820233   72322 main.go:141] libmachine: (no-preload-504385) Getting domain xml...
	I0906 20:04:47.821002   72322 main.go:141] libmachine: (no-preload-504385) Creating domain...
	I0906 20:04:46.374128   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374599   73230 main.go:141] libmachine: (old-k8s-version-843298) Found IP for machine: 192.168.72.30
	I0906 20:04:46.374629   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has current primary IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374642   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserving static IP address...
	I0906 20:04:46.375045   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.375071   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | skip adding static IP to network mk-old-k8s-version-843298 - found existing host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"}
	I0906 20:04:46.375081   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserved static IP address: 192.168.72.30
	I0906 20:04:46.375104   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting for SSH to be available...
	I0906 20:04:46.375119   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Getting to WaitForSSH function...
	I0906 20:04:46.377497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377836   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.377883   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377956   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH client type: external
	I0906 20:04:46.377982   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa (-rw-------)
	I0906 20:04:46.378028   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:46.378044   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | About to run SSH command:
	I0906 20:04:46.378054   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | exit 0
	I0906 20:04:46.505025   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:46.505386   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetConfigRaw
	I0906 20:04:46.506031   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.508401   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.508787   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.508827   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.509092   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:04:46.509321   73230 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:46.509339   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:46.509549   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.511816   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512230   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.512265   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512436   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.512618   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512794   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512932   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.513123   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.513364   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.513378   73230 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:46.629437   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:46.629469   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629712   73230 buildroot.go:166] provisioning hostname "old-k8s-version-843298"
	I0906 20:04:46.629731   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629910   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.632226   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632620   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.632653   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632817   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.633009   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633204   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633364   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.633544   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.633758   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.633779   73230 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-843298 && echo "old-k8s-version-843298" | sudo tee /etc/hostname
	I0906 20:04:46.764241   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-843298
	
	I0906 20:04:46.764271   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.766678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767063   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.767092   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767236   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.767414   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767591   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767740   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.767874   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.768069   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.768088   73230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-843298' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-843298/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-843298' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:46.890399   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:46.890424   73230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:46.890461   73230 buildroot.go:174] setting up certificates
	I0906 20:04:46.890471   73230 provision.go:84] configureAuth start
	I0906 20:04:46.890479   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.890714   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.893391   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893765   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.893802   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893942   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.896173   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896505   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.896524   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896688   73230 provision.go:143] copyHostCerts
	I0906 20:04:46.896741   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:46.896756   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:46.896814   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:46.896967   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:46.896977   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:46.897008   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:46.897096   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:46.897104   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:46.897133   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:46.897193   73230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-843298 san=[127.0.0.1 192.168.72.30 localhost minikube old-k8s-version-843298]
	I0906 20:04:47.128570   73230 provision.go:177] copyRemoteCerts
	I0906 20:04:47.128627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:47.128653   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.131548   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.131952   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.131981   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.132164   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.132396   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.132571   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.132705   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.223745   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:47.249671   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0906 20:04:47.274918   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:47.300351   73230 provision.go:87] duration metric: took 409.869395ms to configureAuth
	I0906 20:04:47.300376   73230 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:47.300584   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:04:47.300673   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.303255   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303559   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.303581   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303739   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.303943   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304098   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304266   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.304407   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.304623   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.304644   73230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:47.539793   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:47.539824   73230 machine.go:96] duration metric: took 1.030489839s to provisionDockerMachine
	I0906 20:04:47.539836   73230 start.go:293] postStartSetup for "old-k8s-version-843298" (driver="kvm2")
	I0906 20:04:47.539849   73230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:47.539884   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.540193   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:47.540220   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.543190   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543482   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.543506   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543707   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.543938   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.544097   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.544243   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.633100   73230 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:47.637336   73230 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:47.637368   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:47.637459   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:47.637541   73230 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:47.637627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:47.648442   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:47.672907   73230 start.go:296] duration metric: took 133.055727ms for postStartSetup
	I0906 20:04:47.672951   73230 fix.go:56] duration metric: took 21.114855209s for fixHost
	I0906 20:04:47.672978   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.675459   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.675833   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.675863   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.676005   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.676303   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676471   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676661   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.676846   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.677056   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.677070   73230 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:47.793647   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653087.750926682
	
	I0906 20:04:47.793671   73230 fix.go:216] guest clock: 1725653087.750926682
	I0906 20:04:47.793681   73230 fix.go:229] Guest: 2024-09-06 20:04:47.750926682 +0000 UTC Remote: 2024-09-06 20:04:47.67295613 +0000 UTC m=+232.250384025 (delta=77.970552ms)
	I0906 20:04:47.793735   73230 fix.go:200] guest clock delta is within tolerance: 77.970552ms
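	The reported delta is simply guest time minus the host-side timestamp taken around the SSH call: 1725653087.750926682 s - 1725653087.672956130 s = 0.077970552 s, i.e. the 77.970552ms above, which is why no clock adjustment is attempted.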
	I0906 20:04:47.793746   73230 start.go:83] releasing machines lock for "old-k8s-version-843298", held for 21.235682628s
	I0906 20:04:47.793778   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.794059   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:47.796792   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797195   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.797229   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797425   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798019   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798230   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798314   73230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:47.798360   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.798488   73230 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:47.798509   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.801253   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801632   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.801658   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801867   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802060   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802122   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.802152   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.802210   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802318   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802460   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802504   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.802580   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802722   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.886458   73230 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:47.910204   73230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:48.055661   73230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:48.063024   73230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:48.063090   73230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:48.084749   73230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:48.084771   73230 start.go:495] detecting cgroup driver to use...
	I0906 20:04:48.084892   73230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:48.105494   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:48.123487   73230 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:48.123564   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:48.145077   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:48.161336   73230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:48.283568   73230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:48.445075   73230 docker.go:233] disabling docker service ...
	I0906 20:04:48.445146   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:48.461122   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:48.475713   73230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:48.632804   73230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:48.762550   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:48.778737   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:48.798465   73230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:04:48.798549   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.811449   73230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:48.811523   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.824192   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.835598   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
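	The three sed edits above only rewrite existing keys in the CRI-O drop-in, so after they run /etc/crio/crio.conf.d/02-crio.conf should carry values roughly like the following (a sketch; only the three key/value pairs come from the log, the surrounding TOML tables are assumed from CRI-O's usual layout):
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.2"
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"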
	I0906 20:04:48.847396   73230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:48.860005   73230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:48.871802   73230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:48.871864   73230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:48.887596   73230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
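	The earlier sysctl probe failed only because br_netfilter was not loaded yet; loading the module first and then re-reading the settings would succeed (commands mirror the ones in the log, collected here for reference):
	  sudo modprobe br_netfilter
	  sudo sysctl net.bridge.bridge-nf-call-iptables
	  cat /proc/sys/net/ipv4/ip_forward   # set to 1 by the echo above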
	I0906 20:04:48.899508   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:49.041924   73230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:49.144785   73230 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:49.144885   73230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:49.150404   73230 start.go:563] Will wait 60s for crictl version
	I0906 20:04:49.150461   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:49.154726   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:49.202450   73230 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:49.202557   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.235790   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.270094   73230 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0906 20:04:49.271457   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:49.274710   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275114   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:49.275139   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275475   73230 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:49.280437   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:49.293664   73230 kubeadm.go:883] updating cluster {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:49.293793   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:04:49.293842   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:49.348172   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:49.348251   73230 ssh_runner.go:195] Run: which lz4
	I0906 20:04:49.352703   73230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:49.357463   73230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:49.357501   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0906 20:04:49.056116   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:51.553185   72867 pod_ready.go:93] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.553217   72867 pod_ready.go:82] duration metric: took 11.508264695s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.553231   72867 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563758   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.563788   72867 pod_ready.go:82] duration metric: took 10.547437ms for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563802   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570906   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.570940   72867 pod_ready.go:82] duration metric: took 7.128595ms for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570957   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:48.657527   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:50.662561   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:49.146755   72322 main.go:141] libmachine: (no-preload-504385) Waiting to get IP...
	I0906 20:04:49.147780   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.148331   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.148406   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.148309   74322 retry.go:31] will retry after 250.314453ms: waiting for machine to come up
	I0906 20:04:49.399920   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.400386   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.400468   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.400345   74322 retry.go:31] will retry after 247.263156ms: waiting for machine to come up
	I0906 20:04:49.648894   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.649420   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.649445   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.649376   74322 retry.go:31] will retry after 391.564663ms: waiting for machine to come up
	I0906 20:04:50.043107   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.043594   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.043617   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.043548   74322 retry.go:31] will retry after 513.924674ms: waiting for machine to come up
	I0906 20:04:50.559145   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.559637   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.559675   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.559543   74322 retry.go:31] will retry after 551.166456ms: waiting for machine to come up
	I0906 20:04:51.111906   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.112967   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.112999   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.112921   74322 retry.go:31] will retry after 653.982425ms: waiting for machine to come up
	I0906 20:04:51.768950   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.769466   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.769496   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.769419   74322 retry.go:31] will retry after 935.670438ms: waiting for machine to come up
	I0906 20:04:52.706493   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:52.707121   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:52.707152   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:52.707062   74322 retry.go:31] will retry after 1.141487289s: waiting for machine to come up
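	While the 72322 process polls for the machine's lease, the same information can be read directly from libvirt; a hypothetical manual check against the network and domain named in the log would be:
	  virsh net-dhcp-leases mk-no-preload-504385
	  virsh domifaddr no-preload-504385 --source lease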
	I0906 20:04:51.190323   73230 crio.go:462] duration metric: took 1.837657617s to copy over tarball
	I0906 20:04:51.190410   73230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:54.320754   73230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.130319477s)
	I0906 20:04:54.320778   73230 crio.go:469] duration metric: took 3.130424981s to extract the tarball
	I0906 20:04:54.320785   73230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:54.388660   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:54.427475   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:54.427505   73230 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:04:54.427580   73230 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.427594   73230 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.427611   73230 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.427662   73230 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.427691   73230 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.427696   73230 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.427813   73230 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.427672   73230 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0906 20:04:54.429432   73230 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.429443   73230 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.429447   73230 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.429448   73230 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.429475   73230 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.429449   73230 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.429496   73230 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.429589   73230 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 20:04:54.603502   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.607745   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.610516   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.613580   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.616591   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.622381   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 20:04:54.636746   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.690207   73230 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0906 20:04:54.690254   73230 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.690306   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.788758   73230 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0906 20:04:54.788804   73230 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.788876   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.804173   73230 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0906 20:04:54.804228   73230 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.804273   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817005   73230 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0906 20:04:54.817056   73230 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.817074   73230 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0906 20:04:54.817101   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817122   73230 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.817138   73230 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0906 20:04:54.817167   73230 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 20:04:54.817202   73230 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0906 20:04:54.817213   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817220   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.817227   73230 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.817168   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817253   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817301   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.817333   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902264   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.902422   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902522   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.902569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.902602   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.902654   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:54.902708   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.061686   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.073933   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.085364   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:55.085463   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.085399   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.085610   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:55.085725   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.192872   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:55.196085   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.255204   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.288569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.291461   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0906 20:04:55.291541   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.291559   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0906 20:04:55.291726   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0906 20:04:53.578469   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.578504   72867 pod_ready.go:82] duration metric: took 2.007539423s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.578534   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583560   72867 pod_ready.go:93] pod "kube-proxy-9wlq4" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.583583   72867 pod_ready.go:82] duration metric: took 5.037068ms for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583594   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832422   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:54.832453   72867 pod_ready.go:82] duration metric: took 1.248849975s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832480   72867 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:56.840031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
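	Each pod_ready probe polls the pod's Ready condition through the API server; the equivalent one-off check from the host would be roughly the following (the pod name is taken from the log, and using the profile name as the kubeconfig context is an assumption about minikube's kubeconfig layout):
	  kubectl --context default-k8s-diff-port-653828 -n kube-system get pod metrics-server-6867b74b74-dds56
	  kubectl --context default-k8s-diff-port-653828 -n kube-system wait --for=condition=Ready pod/metrics-server-6867b74b74-dds56 --timeout=4m0s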
	I0906 20:04:53.156842   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:55.236051   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:53.849822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:53.850213   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:53.850235   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:53.850178   74322 retry.go:31] will retry after 1.858736556s: waiting for machine to come up
	I0906 20:04:55.710052   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:55.710550   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:55.710598   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:55.710496   74322 retry.go:31] will retry after 2.033556628s: waiting for machine to come up
	I0906 20:04:57.745989   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:57.746433   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:57.746459   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:57.746388   74322 retry.go:31] will retry after 1.985648261s: waiting for machine to come up
	I0906 20:04:55.500590   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0906 20:04:55.500702   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0906 20:04:55.500740   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0906 20:04:55.500824   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0906 20:04:55.500885   73230 cache_images.go:92] duration metric: took 1.07336017s to LoadCachedImages
	W0906 20:04:55.500953   73230 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
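	The warning only means the host-side image cache was never populated for this profile, so the runtime will have to pull these images rather than have them side-loaded; a hypothetical way to see which cached tarballs actually exist (path copied from the log) is:
	  ls -l /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/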
	I0906 20:04:55.500969   73230 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.20.0 crio true true} ...
	I0906 20:04:55.501112   73230 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-843298 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:55.501192   73230 ssh_runner.go:195] Run: crio config
	I0906 20:04:55.554097   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:04:55.554119   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:55.554135   73230 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:55.554154   73230 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-843298 NodeName:old-k8s-version-843298 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 20:04:55.554359   73230 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-843298"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:55.554441   73230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0906 20:04:55.565923   73230 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:55.566004   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:55.577366   73230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0906 20:04:55.595470   73230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:55.614641   73230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0906 20:04:55.637739   73230 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:55.642233   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:55.658409   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:55.804327   73230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:55.824288   73230 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298 for IP: 192.168.72.30
	I0906 20:04:55.824308   73230 certs.go:194] generating shared ca certs ...
	I0906 20:04:55.824323   73230 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:55.824479   73230 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:55.824541   73230 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:55.824560   73230 certs.go:256] generating profile certs ...
	I0906 20:04:55.824680   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.key
	I0906 20:04:55.824755   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3
	I0906 20:04:55.824799   73230 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key
	I0906 20:04:55.824952   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:55.824995   73230 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:55.825008   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:55.825041   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:55.825072   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:55.825102   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:55.825158   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:55.825878   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:55.868796   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:55.905185   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:55.935398   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:55.973373   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 20:04:56.008496   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:04:56.046017   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:56.080049   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:56.122717   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:56.151287   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:56.184273   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:56.216780   73230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:56.239708   73230 ssh_runner.go:195] Run: openssl version
	I0906 20:04:56.246127   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:56.257597   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262515   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262594   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.269207   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:56.281646   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:56.293773   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299185   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299255   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.305740   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:56.319060   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:56.330840   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336013   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336082   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.342576   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
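	The symlinks created above follow the usual OpenSSL hashed-directory layout: each CA file is linked under its subject-hash name so verification can find it in /etc/ssl/certs. Reproducing one of them by hand (values copied from the log):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem   # prints 3ec20f2e
	  sudo ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0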
	I0906 20:04:56.354648   73230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:56.359686   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:56.366321   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:56.372646   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:56.379199   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:56.386208   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:56.392519   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 20:04:56.399335   73230 kubeadm.go:392] StartCluster: {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:56.399442   73230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:56.399495   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.441986   73230 cri.go:89] found id: ""
	I0906 20:04:56.442069   73230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:56.454884   73230 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:56.454907   73230 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:56.454977   73230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:56.465647   73230 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:56.466650   73230 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:04:56.467285   73230 kubeconfig.go:62] /home/jenkins/minikube-integration/19576-6021/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-843298" cluster setting kubeconfig missing "old-k8s-version-843298" context setting]
	I0906 20:04:56.468248   73230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:56.565587   73230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:56.576221   73230 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.30
	I0906 20:04:56.576261   73230 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:56.576277   73230 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:56.576342   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.621597   73230 cri.go:89] found id: ""
	I0906 20:04:56.621663   73230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:56.639924   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:56.649964   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:56.649989   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:56.650042   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:56.661290   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:56.661343   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:56.671361   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:56.680865   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:56.680939   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:56.696230   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.706613   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:56.706692   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.719635   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:56.729992   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:56.730045   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:56.740040   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:56.750666   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:56.891897   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.681824   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.972206   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.091751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.206345   73230 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:58.206443   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:58.707412   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.206780   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.707273   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:00.207218   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.340092   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:01.838387   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:57.658033   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:00.157741   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:59.734045   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:59.734565   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:59.734592   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:59.734506   74322 retry.go:31] will retry after 2.767491398s: waiting for machine to come up
	I0906 20:05:02.505314   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:02.505749   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:05:02.505780   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:05:02.505697   74322 retry.go:31] will retry after 3.51382931s: waiting for machine to come up
	I0906 20:05:00.707010   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.206708   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.707125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.207349   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.706670   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.207287   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.706650   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.207125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.707193   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:05.207119   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.838639   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:05.839195   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:02.655906   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:04.656677   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:07.157732   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:06.023595   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024063   72322 main.go:141] libmachine: (no-preload-504385) Found IP for machine: 192.168.61.184
	I0906 20:05:06.024095   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has current primary IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024105   72322 main.go:141] libmachine: (no-preload-504385) Reserving static IP address...
	I0906 20:05:06.024576   72322 main.go:141] libmachine: (no-preload-504385) Reserved static IP address: 192.168.61.184
	I0906 20:05:06.024598   72322 main.go:141] libmachine: (no-preload-504385) Waiting for SSH to be available...
	I0906 20:05:06.024621   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.024643   72322 main.go:141] libmachine: (no-preload-504385) DBG | skip adding static IP to network mk-no-preload-504385 - found existing host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"}
	I0906 20:05:06.024666   72322 main.go:141] libmachine: (no-preload-504385) DBG | Getting to WaitForSSH function...
	I0906 20:05:06.026845   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027166   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.027219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027296   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH client type: external
	I0906 20:05:06.027321   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa (-rw-------)
	I0906 20:05:06.027355   72322 main.go:141] libmachine: (no-preload-504385) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:05:06.027376   72322 main.go:141] libmachine: (no-preload-504385) DBG | About to run SSH command:
	I0906 20:05:06.027403   72322 main.go:141] libmachine: (no-preload-504385) DBG | exit 0
	I0906 20:05:06.148816   72322 main.go:141] libmachine: (no-preload-504385) DBG | SSH cmd err, output: <nil>: 
	I0906 20:05:06.149196   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetConfigRaw
	I0906 20:05:06.149951   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.152588   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.152970   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.153003   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.153238   72322 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/config.json ...
	I0906 20:05:06.153485   72322 machine.go:93] provisionDockerMachine start ...
	I0906 20:05:06.153508   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:06.153714   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.156031   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156394   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.156425   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156562   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.156732   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.156901   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.157051   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.157205   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.157411   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.157425   72322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:05:06.261544   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:05:06.261586   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.261861   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:05:06.261895   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.262063   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.264812   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265192   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.265219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265400   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.265570   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265705   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265856   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.265990   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.266145   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.266157   72322 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-504385 && echo "no-preload-504385" | sudo tee /etc/hostname
	I0906 20:05:06.383428   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-504385
	
	I0906 20:05:06.383456   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.386368   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386722   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.386755   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386968   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.387152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387322   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387439   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.387617   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.387817   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.387840   72322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-504385' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-504385/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-504385' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:05:06.501805   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:05:06.501836   72322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:05:06.501854   72322 buildroot.go:174] setting up certificates
	I0906 20:05:06.501866   72322 provision.go:84] configureAuth start
	I0906 20:05:06.501873   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.502152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.504721   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505086   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.505115   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505250   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.507420   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507765   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.507795   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507940   72322 provision.go:143] copyHostCerts
	I0906 20:05:06.508008   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:05:06.508031   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:05:06.508087   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:05:06.508175   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:05:06.508183   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:05:06.508208   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:05:06.508297   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:05:06.508307   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:05:06.508338   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:05:06.508406   72322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.no-preload-504385 san=[127.0.0.1 192.168.61.184 localhost minikube no-preload-504385]
	I0906 20:05:06.681719   72322 provision.go:177] copyRemoteCerts
	I0906 20:05:06.681786   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:05:06.681810   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.684460   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684779   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.684822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684962   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.685125   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.685258   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.685368   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:06.767422   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:05:06.794881   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0906 20:05:06.821701   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:05:06.848044   72322 provision.go:87] duration metric: took 346.1664ms to configureAuth
	I0906 20:05:06.848075   72322 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:05:06.848271   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:05:06.848348   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.850743   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851037   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.851064   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851226   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.851395   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851549   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851674   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.851791   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.851993   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.852020   72322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:05:07.074619   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:05:07.074643   72322 machine.go:96] duration metric: took 921.143238ms to provisionDockerMachine
	I0906 20:05:07.074654   72322 start.go:293] postStartSetup for "no-preload-504385" (driver="kvm2")
	I0906 20:05:07.074664   72322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:05:07.074678   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.075017   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:05:07.075042   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.077988   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078268   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.078287   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078449   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.078634   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.078791   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.078946   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.165046   72322 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:05:07.169539   72322 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:05:07.169565   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:05:07.169631   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:05:07.169700   72322 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:05:07.169783   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:05:07.179344   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:07.204213   72322 start.go:296] duration metric: took 129.545341ms for postStartSetup
	I0906 20:05:07.204265   72322 fix.go:56] duration metric: took 19.41036755s for fixHost
	I0906 20:05:07.204287   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.207087   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207473   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.207513   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207695   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.207905   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208090   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208267   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.208436   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:07.208640   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:07.208655   72322 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:05:07.314172   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653107.281354639
	
	I0906 20:05:07.314195   72322 fix.go:216] guest clock: 1725653107.281354639
	I0906 20:05:07.314205   72322 fix.go:229] Guest: 2024-09-06 20:05:07.281354639 +0000 UTC Remote: 2024-09-06 20:05:07.204269406 +0000 UTC m=+358.676673749 (delta=77.085233ms)
	I0906 20:05:07.314228   72322 fix.go:200] guest clock delta is within tolerance: 77.085233ms
	I0906 20:05:07.314237   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 19.52037381s
	I0906 20:05:07.314266   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.314552   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:07.317476   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.317839   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.317873   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.318003   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318542   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318716   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318821   72322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:05:07.318876   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.318991   72322 ssh_runner.go:195] Run: cat /version.json
	I0906 20:05:07.319018   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.321880   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322102   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322308   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322340   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322472   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322508   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322550   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322685   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.322713   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322868   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.322875   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.323062   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.323066   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.323221   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.424438   72322 ssh_runner.go:195] Run: systemctl --version
	I0906 20:05:07.430755   72322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:05:07.579436   72322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:05:07.585425   72322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:05:07.585493   72322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:05:07.601437   72322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:05:07.601462   72322 start.go:495] detecting cgroup driver to use...
	I0906 20:05:07.601529   72322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:05:07.620368   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:05:07.634848   72322 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:05:07.634912   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:05:07.648810   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:05:07.664084   72322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:05:07.796601   72322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:05:07.974836   72322 docker.go:233] disabling docker service ...
	I0906 20:05:07.974911   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:05:07.989013   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:05:08.002272   72322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:05:08.121115   72322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:05:08.247908   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:05:08.262855   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:05:08.281662   72322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:05:08.281730   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.292088   72322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:05:08.292165   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.302601   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.313143   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.323852   72322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:05:08.335791   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.347619   72322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.365940   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.376124   72322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:05:08.385677   72322 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:05:08.385743   72322 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:05:08.398445   72322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:05:08.408477   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:08.518447   72322 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:05:08.613636   72322 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:05:08.613707   72322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:05:08.619050   72322 start.go:563] Will wait 60s for crictl version
	I0906 20:05:08.619134   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:08.622959   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:05:08.668229   72322 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:05:08.668297   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.702416   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.733283   72322 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:05:05.707351   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.206573   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.707452   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.206554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.706854   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.206925   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.707456   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.207200   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.706741   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:10.206605   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.839381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.839918   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.157889   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:11.158761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:08.734700   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:08.737126   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737477   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:08.737504   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737692   72322 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0906 20:05:08.741940   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:08.756235   72322 kubeadm.go:883] updating cluster {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:05:08.756380   72322 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:05:08.756426   72322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:05:08.798359   72322 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:05:08.798388   72322 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:05:08.798484   72322 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.798507   72322 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.798520   72322 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0906 20:05:08.798559   72322 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.798512   72322 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.798571   72322 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.798494   72322 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.798489   72322 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800044   72322 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.800055   72322 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800048   72322 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0906 20:05:08.800067   72322 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.800070   72322 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.800043   72322 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.800046   72322 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.800050   72322 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.960723   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.967887   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.980496   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.988288   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.990844   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.000220   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.031002   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0906 20:05:09.046388   72322 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0906 20:05:09.046430   72322 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.046471   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.079069   72322 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0906 20:05:09.079112   72322 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.079161   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147423   72322 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0906 20:05:09.147470   72322 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.147521   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147529   72322 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0906 20:05:09.147549   72322 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.147584   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153575   72322 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0906 20:05:09.153612   72322 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.153659   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153662   72322 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0906 20:05:09.153697   72322 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.153736   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.272296   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.272317   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.272325   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.272368   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.272398   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.272474   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.397590   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.398793   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.398807   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.398899   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.398912   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.398969   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.515664   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.529550   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.529604   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.529762   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.532314   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.532385   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.603138   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.654698   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0906 20:05:09.654823   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:09.671020   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0906 20:05:09.671069   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0906 20:05:09.671123   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0906 20:05:09.671156   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:09.671128   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.671208   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:09.686883   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0906 20:05:09.687013   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:09.709594   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0906 20:05:09.709706   72322 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0906 20:05:09.709758   72322 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.709858   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0906 20:05:09.709877   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709868   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.709940   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0906 20:05:09.709906   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709994   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0906 20:05:09.709771   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0906 20:05:09.709973   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0906 20:05:09.709721   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:09.714755   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0906 20:05:12.389459   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.679458658s)
	I0906 20:05:12.389498   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0906 20:05:12.389522   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389524   72322 ssh_runner.go:235] Completed: which crictl: (2.679596804s)
	I0906 20:05:12.389573   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389582   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:10.706506   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.207411   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.707316   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.207239   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.706502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.206560   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.706593   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.207192   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.706940   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:15.207250   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.338753   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.339694   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.839193   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:13.656815   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.156988   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.349906   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.960304583s)
	I0906 20:05:14.349962   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960364149s)
	I0906 20:05:14.349988   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:14.350001   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0906 20:05:14.350032   72322 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.350085   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.397740   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:16.430883   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.03310928s)
	I0906 20:05:16.430943   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 20:05:16.430977   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.080869318s)
	I0906 20:05:16.431004   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0906 20:05:16.431042   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:16.431042   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:16.431103   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:18.293255   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.862123731s)
	I0906 20:05:18.293274   72322 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.862211647s)
	I0906 20:05:18.293294   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0906 20:05:18.293315   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0906 20:05:18.293324   72322 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:18.293372   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:15.706728   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.207477   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.707337   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.206710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.707209   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.206544   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.707104   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.206752   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.706561   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:20.206507   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.840176   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.339033   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:18.657074   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.157488   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:19.142756   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0906 20:05:19.142784   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:19.142824   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:20.494611   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351756729s)
	I0906 20:05:20.494642   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0906 20:05:20.494656   72322 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.494706   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.706855   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.206585   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.706948   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.207150   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.706508   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.207459   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.706894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.206643   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.707208   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:25.206797   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.838561   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:25.838697   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:23.656303   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:26.156813   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:24.186953   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.692203906s)
	I0906 20:05:24.186987   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0906 20:05:24.187019   72322 cache_images.go:123] Successfully loaded all cached images
	I0906 20:05:24.187026   72322 cache_images.go:92] duration metric: took 15.388623154s to LoadCachedImages
	I0906 20:05:24.187040   72322 kubeadm.go:934] updating node { 192.168.61.184 8443 v1.31.0 crio true true} ...
	I0906 20:05:24.187169   72322 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-504385 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:05:24.187251   72322 ssh_runner.go:195] Run: crio config
	I0906 20:05:24.236699   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:24.236722   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:24.236746   72322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:05:24.236770   72322 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.184 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-504385 NodeName:no-preload-504385 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:05:24.236943   72322 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-504385"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:05:24.237005   72322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:05:24.247480   72322 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:05:24.247554   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:05:24.257088   72322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0906 20:05:24.274447   72322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:05:24.292414   72322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0906 20:05:24.310990   72322 ssh_runner.go:195] Run: grep 192.168.61.184	control-plane.minikube.internal$ /etc/hosts
	I0906 20:05:24.315481   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:24.327268   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:24.465318   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:05:24.482195   72322 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385 for IP: 192.168.61.184
	I0906 20:05:24.482216   72322 certs.go:194] generating shared ca certs ...
	I0906 20:05:24.482230   72322 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:05:24.482364   72322 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:05:24.482407   72322 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:05:24.482420   72322 certs.go:256] generating profile certs ...
	I0906 20:05:24.482522   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/client.key
	I0906 20:05:24.482603   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key.9c78613e
	I0906 20:05:24.482664   72322 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key
	I0906 20:05:24.482828   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:05:24.482878   72322 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:05:24.482894   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:05:24.482927   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:05:24.482956   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:05:24.482992   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:05:24.483043   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:24.483686   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:05:24.528742   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:05:24.561921   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:05:24.596162   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:05:24.636490   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 20:05:24.664450   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:05:24.690551   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:05:24.717308   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:05:24.741498   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:05:24.764388   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:05:24.789473   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:05:24.814772   72322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:05:24.833405   72322 ssh_runner.go:195] Run: openssl version
	I0906 20:05:24.841007   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:05:24.852635   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857351   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857404   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.863435   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:05:24.874059   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:05:24.884939   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889474   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889567   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.895161   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:05:24.905629   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:05:24.916101   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920494   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920550   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.925973   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:05:24.937017   72322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:05:24.941834   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:05:24.947779   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:05:24.954042   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:05:24.959977   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:05:24.965500   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:05:24.970996   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 20:05:24.976532   72322 kubeadm.go:392] StartCluster: {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:05:24.976606   72322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:05:24.976667   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.015556   72322 cri.go:89] found id: ""
	I0906 20:05:25.015653   72322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:05:25.032921   72322 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:05:25.032954   72322 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:05:25.033009   72322 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:05:25.044039   72322 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:05:25.045560   72322 kubeconfig.go:125] found "no-preload-504385" server: "https://192.168.61.184:8443"
	I0906 20:05:25.049085   72322 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:05:25.059027   72322 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.184
	I0906 20:05:25.059060   72322 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:05:25.059073   72322 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:05:25.059128   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.096382   72322 cri.go:89] found id: ""
	I0906 20:05:25.096446   72322 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:05:25.114296   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:05:25.126150   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:05:25.126168   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:05:25.126207   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:05:25.136896   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:05:25.136964   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:05:25.148074   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:05:25.158968   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:05:25.159027   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:05:25.169642   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.179183   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:05:25.179258   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.189449   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:05:25.199237   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:05:25.199286   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:05:25.209663   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:05:25.220511   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:25.336312   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.475543   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.139195419s)
	I0906 20:05:26.475586   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.700018   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.768678   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.901831   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:05:26.901928   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.401987   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.903023   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.957637   72322 api_server.go:72] duration metric: took 1.055807s to wait for apiserver process to appear ...
	I0906 20:05:27.957664   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:05:27.957684   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:27.958196   72322 api_server.go:269] stopped: https://192.168.61.184:8443/healthz: Get "https://192.168.61.184:8443/healthz": dial tcp 192.168.61.184:8443: connect: connection refused
	I0906 20:05:28.458421   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:25.706669   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.206691   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.707336   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.206666   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.706715   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.206488   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.706489   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.207461   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.707293   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:30.206591   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.840001   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:29.840101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.768451   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:05:30.768482   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:05:30.768505   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.868390   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.868430   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:30.958611   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.964946   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.964977   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.458125   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.462130   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.462155   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.958761   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.963320   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.963347   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:32.458596   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:32.464885   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:05:32.474582   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:05:32.474616   72322 api_server.go:131] duration metric: took 4.51694462s to wait for apiserver health ...
	I0906 20:05:32.474627   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:32.474635   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:32.476583   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:05:28.157326   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.657628   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:32.477797   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:05:32.490715   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:05:32.510816   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:05:32.529192   72322 system_pods.go:59] 8 kube-system pods found
	I0906 20:05:32.529236   72322 system_pods.go:61] "coredns-6f6b679f8f-s7tnx" [ce438653-a3b9-4412-8705-7d2db7df5d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:05:32.529254   72322 system_pods.go:61] "etcd-no-preload-504385" [6ec6b2a1-c22a-44b4-b726-808a56f2be2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:05:32.529266   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [5f2baa0b-3cf3-4e0d-984b-80fa19adb3b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:05:32.529275   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [59ffbd51-6a83-43e6-8ef7-bc1cfd80b4d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:05:32.529292   72322 system_pods.go:61] "kube-proxy-dg8sg" [2e0393f3-b9bd-4603-b800-e1a2fdbf71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:05:32.529300   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [52a74c91-a6ec-4d64-8651-e1f87db21b40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:05:32.529306   72322 system_pods.go:61] "metrics-server-6867b74b74-nn295" [9d0f51d1-7abf-4f63-bef7-c02f6cd89c5d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:05:32.529313   72322 system_pods.go:61] "storage-provisioner" [69ed0066-2b84-4a4d-91e5-1e25bb3f31eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:05:32.529320   72322 system_pods.go:74] duration metric: took 18.48107ms to wait for pod list to return data ...
	I0906 20:05:32.529333   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:05:32.535331   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:05:32.535363   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:05:32.535376   72322 node_conditions.go:105] duration metric: took 6.037772ms to run NodePressure ...
	I0906 20:05:32.535397   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:32.955327   72322 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962739   72322 kubeadm.go:739] kubelet initialised
	I0906 20:05:32.962767   72322 kubeadm.go:740] duration metric: took 7.415054ms waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962776   72322 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:05:32.980280   72322 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
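	[editor's note] The pod_ready.go lines above and below ("waiting up to 4m0s for pod ... to be 'Ready'", then repeated "has status 'Ready':'False'" checks) amount to polling each system pod's PodReady condition. The sketch below is a minimal client-go illustration of that loop, not minikube's pod_ready.go; the kubeconfig path, poll interval, namespace, and pod name are placeholders.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	// waitForPodReady re-fetches the pod until it is Ready or the timeout expires.
	func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				return nil
			}
			time.Sleep(2 * time.Second) // the log shows roughly 2s-spaced re-checks
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(cs, "kube-system", "coredns-6f6b679f8f-s7tnx", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}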
	I0906 20:05:30.707091   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.207070   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.707224   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.207295   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.707195   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.207373   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.707519   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.207428   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.706808   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:35.207396   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.340006   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.838636   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:36.838703   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:33.155769   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.156761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.994689   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.487610   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.707415   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.206955   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.706868   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.206515   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.706659   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.206735   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.706915   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.207300   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.707211   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:40.207085   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.839362   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:41.338875   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.657190   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.158940   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:39.986557   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.486518   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.706720   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.206896   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.707281   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.206751   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.706754   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.206987   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.707245   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.207502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.707112   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:45.206569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.339353   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.838975   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.657187   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.156196   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:47.157014   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:43.986675   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.986701   72322 pod_ready.go:82] duration metric: took 11.006397745s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.986710   72322 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991650   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.991671   72322 pod_ready.go:82] duration metric: took 4.955425ms for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991680   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997218   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:44.997242   72322 pod_ready.go:82] duration metric: took 1.005553613s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997253   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002155   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.002177   72322 pod_ready.go:82] duration metric: took 4.916677ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002186   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006610   72322 pod_ready.go:93] pod "kube-proxy-dg8sg" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.006631   72322 pod_ready.go:82] duration metric: took 4.439092ms for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006639   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185114   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.185139   72322 pod_ready.go:82] duration metric: took 178.494249ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185149   72322 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:47.191676   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.707450   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.207446   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.707006   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.206484   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.707168   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.207536   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.707554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.206894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.706709   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:50.206799   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.338355   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.839372   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.157301   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.157426   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.193619   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.692286   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.707012   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.206914   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.706917   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.207465   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.706682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.206565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.706757   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.206600   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.706926   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:55.207382   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.338845   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.339570   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:53.656904   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.158806   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:54.191331   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.192498   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.707103   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.206621   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.707156   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.207277   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.706568   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:58.206599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:05:58.206698   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:05:58.245828   73230 cri.go:89] found id: ""
	I0906 20:05:58.245857   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.245868   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:05:58.245875   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:05:58.245938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:05:58.283189   73230 cri.go:89] found id: ""
	I0906 20:05:58.283217   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.283228   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:05:58.283235   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:05:58.283303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:05:58.320834   73230 cri.go:89] found id: ""
	I0906 20:05:58.320868   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.320880   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:05:58.320889   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:05:58.320944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:05:58.356126   73230 cri.go:89] found id: ""
	I0906 20:05:58.356152   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.356162   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:05:58.356169   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:05:58.356227   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:05:58.395951   73230 cri.go:89] found id: ""
	I0906 20:05:58.395977   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.395987   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:05:58.395994   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:05:58.396061   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:05:58.431389   73230 cri.go:89] found id: ""
	I0906 20:05:58.431415   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.431426   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:05:58.431433   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:05:58.431511   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:05:58.466255   73230 cri.go:89] found id: ""
	I0906 20:05:58.466285   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.466294   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:05:58.466300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:05:58.466356   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:05:58.505963   73230 cri.go:89] found id: ""
	I0906 20:05:58.505989   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.505997   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:05:58.506006   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:05:58.506018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:05:58.579027   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:05:58.579061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:05:58.620332   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:05:58.620365   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:05:58.675017   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:05:58.675052   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:05:58.689944   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:05:58.689970   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:05:58.825396   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:05:57.838610   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.339329   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.656312   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.656996   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.691099   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.692040   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.192516   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:01.326375   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:01.340508   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:01.340570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:01.375429   73230 cri.go:89] found id: ""
	I0906 20:06:01.375460   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.375470   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:01.375478   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:01.375539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:01.410981   73230 cri.go:89] found id: ""
	I0906 20:06:01.411008   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.411019   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:01.411026   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:01.411083   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:01.448925   73230 cri.go:89] found id: ""
	I0906 20:06:01.448957   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.448968   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:01.448975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:01.449040   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:01.492063   73230 cri.go:89] found id: ""
	I0906 20:06:01.492094   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.492104   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:01.492112   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:01.492181   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:01.557779   73230 cri.go:89] found id: ""
	I0906 20:06:01.557812   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.557823   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:01.557830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:01.557892   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:01.604397   73230 cri.go:89] found id: ""
	I0906 20:06:01.604424   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.604432   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:01.604437   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:01.604482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:01.642249   73230 cri.go:89] found id: ""
	I0906 20:06:01.642280   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.642292   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:01.642300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:01.642364   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:01.692434   73230 cri.go:89] found id: ""
	I0906 20:06:01.692462   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.692474   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:01.692483   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:01.692498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:01.705860   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:01.705884   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:01.783929   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:01.783954   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:01.783965   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:01.864347   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:01.864385   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:01.902284   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:01.902311   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:04.456090   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:04.469775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:04.469840   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:04.505742   73230 cri.go:89] found id: ""
	I0906 20:06:04.505769   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.505778   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:04.505783   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:04.505835   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:04.541787   73230 cri.go:89] found id: ""
	I0906 20:06:04.541811   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.541819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:04.541824   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:04.541874   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:04.578775   73230 cri.go:89] found id: ""
	I0906 20:06:04.578806   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.578817   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:04.578825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:04.578885   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:04.614505   73230 cri.go:89] found id: ""
	I0906 20:06:04.614533   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.614542   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:04.614548   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:04.614594   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:04.652988   73230 cri.go:89] found id: ""
	I0906 20:06:04.653016   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.653027   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:04.653035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:04.653104   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:04.692380   73230 cri.go:89] found id: ""
	I0906 20:06:04.692408   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.692416   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:04.692423   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:04.692478   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:04.729846   73230 cri.go:89] found id: ""
	I0906 20:06:04.729869   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.729880   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:04.729887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:04.729953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:04.766341   73230 cri.go:89] found id: ""
	I0906 20:06:04.766370   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.766379   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:04.766390   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:04.766405   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:04.779801   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:04.779828   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:04.855313   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:04.855334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:04.855346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:04.934210   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:04.934246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:04.975589   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:04.975621   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:02.839427   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:04.840404   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.158048   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.655510   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.192558   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.692755   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.528622   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:07.544085   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:07.544156   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:07.588106   73230 cri.go:89] found id: ""
	I0906 20:06:07.588139   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.588149   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:07.588157   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:07.588210   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:07.630440   73230 cri.go:89] found id: ""
	I0906 20:06:07.630476   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.630494   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:07.630500   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:07.630551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:07.668826   73230 cri.go:89] found id: ""
	I0906 20:06:07.668870   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.668889   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:07.668898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:07.668962   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:07.706091   73230 cri.go:89] found id: ""
	I0906 20:06:07.706118   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.706130   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:07.706138   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:07.706196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:07.741679   73230 cri.go:89] found id: ""
	I0906 20:06:07.741708   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.741719   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:07.741726   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:07.741792   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:07.778240   73230 cri.go:89] found id: ""
	I0906 20:06:07.778277   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.778288   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:07.778296   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:07.778352   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:07.813183   73230 cri.go:89] found id: ""
	I0906 20:06:07.813212   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.813224   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:07.813232   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:07.813294   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:07.853938   73230 cri.go:89] found id: ""
	I0906 20:06:07.853970   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.853980   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:07.853988   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:07.854001   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:07.893540   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:07.893567   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:07.944219   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:07.944262   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:07.959601   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:07.959635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:08.034487   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:08.034513   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:08.034529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:07.339634   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:09.838953   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.658315   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.157980   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.192738   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.691823   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.611413   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:10.625273   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:10.625353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:10.664568   73230 cri.go:89] found id: ""
	I0906 20:06:10.664597   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.664609   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:10.664617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:10.664680   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:10.702743   73230 cri.go:89] found id: ""
	I0906 20:06:10.702772   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.702783   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:10.702790   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:10.702850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:10.739462   73230 cri.go:89] found id: ""
	I0906 20:06:10.739487   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.739504   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:10.739511   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:10.739572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:10.776316   73230 cri.go:89] found id: ""
	I0906 20:06:10.776344   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.776355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:10.776362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:10.776420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:10.809407   73230 cri.go:89] found id: ""
	I0906 20:06:10.809440   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.809451   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:10.809459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:10.809519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:10.844736   73230 cri.go:89] found id: ""
	I0906 20:06:10.844765   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.844777   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:10.844784   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:10.844851   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:10.880658   73230 cri.go:89] found id: ""
	I0906 20:06:10.880685   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.880693   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:10.880698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:10.880753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:10.917032   73230 cri.go:89] found id: ""
	I0906 20:06:10.917063   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.917074   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:10.917085   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:10.917100   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:10.980241   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:10.980272   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:10.995389   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:10.995435   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:11.070285   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:11.070313   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:11.070328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:11.155574   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:11.155607   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:13.703712   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:13.718035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:13.718093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:13.753578   73230 cri.go:89] found id: ""
	I0906 20:06:13.753603   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.753611   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:13.753617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:13.753659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:13.790652   73230 cri.go:89] found id: ""
	I0906 20:06:13.790681   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.790691   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:13.790697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:13.790749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:13.824243   73230 cri.go:89] found id: ""
	I0906 20:06:13.824278   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.824288   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:13.824293   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:13.824342   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:13.859647   73230 cri.go:89] found id: ""
	I0906 20:06:13.859691   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.859702   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:13.859721   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:13.859781   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:13.897026   73230 cri.go:89] found id: ""
	I0906 20:06:13.897061   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.897068   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:13.897075   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:13.897131   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:13.933904   73230 cri.go:89] found id: ""
	I0906 20:06:13.933927   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.933935   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:13.933941   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:13.933986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:13.969168   73230 cri.go:89] found id: ""
	I0906 20:06:13.969198   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.969210   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:13.969218   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:13.969295   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:14.005808   73230 cri.go:89] found id: ""
	I0906 20:06:14.005838   73230 logs.go:276] 0 containers: []
	W0906 20:06:14.005849   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:14.005862   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:14.005878   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:14.060878   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:14.060915   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:14.075388   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:14.075414   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:14.144942   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:14.144966   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:14.144981   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:14.233088   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:14.233139   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:12.338579   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.839062   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.655992   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.657020   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.157119   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.692103   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.193196   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:16.776744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:16.790292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:16.790384   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:16.828877   73230 cri.go:89] found id: ""
	I0906 20:06:16.828910   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.828921   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:16.828929   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:16.829016   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:16.864413   73230 cri.go:89] found id: ""
	I0906 20:06:16.864440   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.864449   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:16.864455   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:16.864525   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:16.908642   73230 cri.go:89] found id: ""
	I0906 20:06:16.908676   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.908687   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:16.908694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:16.908748   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:16.952247   73230 cri.go:89] found id: ""
	I0906 20:06:16.952278   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.952286   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:16.952292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:16.952343   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:16.990986   73230 cri.go:89] found id: ""
	I0906 20:06:16.991013   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.991022   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:16.991028   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:16.991077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:17.031002   73230 cri.go:89] found id: ""
	I0906 20:06:17.031034   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.031045   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:17.031052   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:17.031114   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:17.077533   73230 cri.go:89] found id: ""
	I0906 20:06:17.077560   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.077572   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:17.077579   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:17.077646   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:17.116770   73230 cri.go:89] found id: ""
	I0906 20:06:17.116798   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.116806   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:17.116817   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:17.116834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.169300   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:17.169337   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:17.184266   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:17.184299   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:17.266371   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:17.266400   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:17.266419   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:17.343669   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:17.343698   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:19.886541   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:19.899891   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:19.899951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:19.946592   73230 cri.go:89] found id: ""
	I0906 20:06:19.946621   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.946630   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:19.946636   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:19.946686   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:19.981758   73230 cri.go:89] found id: ""
	I0906 20:06:19.981788   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.981797   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:19.981802   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:19.981854   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:20.018372   73230 cri.go:89] found id: ""
	I0906 20:06:20.018397   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.018405   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:20.018411   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:20.018460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:20.054380   73230 cri.go:89] found id: ""
	I0906 20:06:20.054428   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.054440   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:20.054449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:20.054521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:20.092343   73230 cri.go:89] found id: ""
	I0906 20:06:20.092376   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.092387   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:20.092395   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:20.092463   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:20.128568   73230 cri.go:89] found id: ""
	I0906 20:06:20.128594   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.128604   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:20.128610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:20.128657   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:20.166018   73230 cri.go:89] found id: ""
	I0906 20:06:20.166046   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.166057   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:20.166072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:20.166125   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:20.203319   73230 cri.go:89] found id: ""
	I0906 20:06:20.203347   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.203355   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:20.203365   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:20.203381   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:20.287217   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:20.287243   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:20.287259   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:20.372799   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:20.372834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:20.416595   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:20.416620   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.338546   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.342409   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.838689   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.657411   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:22.157972   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.691327   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.692066   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:20.468340   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:20.468378   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:22.983259   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:22.997014   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:22.997098   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:23.034483   73230 cri.go:89] found id: ""
	I0906 20:06:23.034513   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.034524   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:23.034531   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:23.034597   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:23.072829   73230 cri.go:89] found id: ""
	I0906 20:06:23.072867   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.072878   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:23.072885   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:23.072949   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:23.110574   73230 cri.go:89] found id: ""
	I0906 20:06:23.110602   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.110613   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:23.110620   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:23.110684   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:23.149506   73230 cri.go:89] found id: ""
	I0906 20:06:23.149538   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.149550   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:23.149557   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:23.149619   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:23.191321   73230 cri.go:89] found id: ""
	I0906 20:06:23.191355   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.191367   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:23.191374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:23.191441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:23.233737   73230 cri.go:89] found id: ""
	I0906 20:06:23.233770   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.233791   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:23.233800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:23.233873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:23.270013   73230 cri.go:89] found id: ""
	I0906 20:06:23.270048   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.270060   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:23.270068   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:23.270127   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:23.309517   73230 cri.go:89] found id: ""
	I0906 20:06:23.309541   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.309549   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:23.309566   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:23.309578   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:23.380645   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:23.380675   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:23.380690   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:23.463656   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:23.463696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:23.504100   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:23.504134   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:23.557438   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:23.557483   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:23.841101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.340722   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.658261   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:27.155171   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.193829   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.690602   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.074045   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:26.088006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:26.088072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:26.124445   73230 cri.go:89] found id: ""
	I0906 20:06:26.124469   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.124476   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:26.124482   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:26.124537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:26.158931   73230 cri.go:89] found id: ""
	I0906 20:06:26.158957   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.158968   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:26.158975   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:26.159035   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:26.197125   73230 cri.go:89] found id: ""
	I0906 20:06:26.197154   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.197164   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:26.197171   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:26.197234   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:26.233241   73230 cri.go:89] found id: ""
	I0906 20:06:26.233278   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.233291   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:26.233300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:26.233366   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:26.269910   73230 cri.go:89] found id: ""
	I0906 20:06:26.269943   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.269955   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:26.269962   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:26.270026   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:26.308406   73230 cri.go:89] found id: ""
	I0906 20:06:26.308439   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.308450   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:26.308459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:26.308521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:26.344248   73230 cri.go:89] found id: ""
	I0906 20:06:26.344276   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.344288   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:26.344295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:26.344353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:26.391794   73230 cri.go:89] found id: ""
	I0906 20:06:26.391827   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.391840   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:26.391851   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:26.391866   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:26.444192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:26.444231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:26.459113   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:26.459144   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:26.533920   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:26.533945   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:26.533960   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:26.616382   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:26.616416   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:29.160429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:29.175007   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:29.175063   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:29.212929   73230 cri.go:89] found id: ""
	I0906 20:06:29.212961   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.212972   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:29.212980   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:29.213042   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:29.250777   73230 cri.go:89] found id: ""
	I0906 20:06:29.250806   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.250815   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:29.250821   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:29.250870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:29.292222   73230 cri.go:89] found id: ""
	I0906 20:06:29.292253   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.292262   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:29.292268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:29.292331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:29.328379   73230 cri.go:89] found id: ""
	I0906 20:06:29.328413   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.328431   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:29.328436   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:29.328482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:29.366792   73230 cri.go:89] found id: ""
	I0906 20:06:29.366822   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.366834   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:29.366841   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:29.366903   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:29.402233   73230 cri.go:89] found id: ""
	I0906 20:06:29.402261   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.402270   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:29.402276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:29.402331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:29.436695   73230 cri.go:89] found id: ""
	I0906 20:06:29.436724   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.436731   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:29.436736   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:29.436787   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:29.473050   73230 cri.go:89] found id: ""
	I0906 20:06:29.473074   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.473082   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:29.473091   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:29.473101   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:29.524981   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:29.525018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:29.538698   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:29.538722   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:29.611026   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:29.611049   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:29.611064   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:29.686898   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:29.686931   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:28.839118   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:30.839532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:29.156985   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.656552   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:28.694188   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.191032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.192623   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:32.228399   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:32.244709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:32.244775   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:32.285681   73230 cri.go:89] found id: ""
	I0906 20:06:32.285713   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.285724   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:32.285732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:32.285794   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:32.325312   73230 cri.go:89] found id: ""
	I0906 20:06:32.325340   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.325349   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:32.325355   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:32.325400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:32.361420   73230 cri.go:89] found id: ""
	I0906 20:06:32.361455   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.361468   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:32.361477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:32.361543   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:32.398881   73230 cri.go:89] found id: ""
	I0906 20:06:32.398956   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.398971   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:32.398984   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:32.399041   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:32.435336   73230 cri.go:89] found id: ""
	I0906 20:06:32.435362   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.435370   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:32.435375   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:32.435427   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:32.472849   73230 cri.go:89] found id: ""
	I0906 20:06:32.472900   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.472909   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:32.472914   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:32.472964   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:32.508176   73230 cri.go:89] found id: ""
	I0906 20:06:32.508199   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.508208   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:32.508213   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:32.508271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:32.550519   73230 cri.go:89] found id: ""
	I0906 20:06:32.550550   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.550561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:32.550576   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:32.550593   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:32.601362   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:32.601394   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:32.614821   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:32.614849   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:32.686044   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:32.686061   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:32.686074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:32.767706   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:32.767744   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:35.309159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:35.322386   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:35.322462   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:35.362909   73230 cri.go:89] found id: ""
	I0906 20:06:35.362937   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.362948   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:35.362955   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:35.363017   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:35.400591   73230 cri.go:89] found id: ""
	I0906 20:06:35.400621   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.400629   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:35.400635   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:35.400682   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:35.436547   73230 cri.go:89] found id: ""
	I0906 20:06:35.436578   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.436589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:35.436596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:35.436666   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:33.338812   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.340154   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.656782   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.657043   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.691312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:37.691358   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.473130   73230 cri.go:89] found id: ""
	I0906 20:06:35.473155   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.473163   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:35.473168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:35.473244   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:35.509646   73230 cri.go:89] found id: ""
	I0906 20:06:35.509677   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.509687   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:35.509695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:35.509754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:35.547651   73230 cri.go:89] found id: ""
	I0906 20:06:35.547684   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.547696   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:35.547703   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:35.547761   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:35.608590   73230 cri.go:89] found id: ""
	I0906 20:06:35.608614   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.608624   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:35.608631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:35.608691   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:35.651508   73230 cri.go:89] found id: ""
	I0906 20:06:35.651550   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.651561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:35.651572   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:35.651585   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:35.705502   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:35.705542   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:35.719550   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:35.719577   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:35.791435   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:35.791461   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:35.791476   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:35.869018   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:35.869070   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:38.411587   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:38.425739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:38.425800   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:38.463534   73230 cri.go:89] found id: ""
	I0906 20:06:38.463560   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.463571   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:38.463578   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:38.463628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:38.499238   73230 cri.go:89] found id: ""
	I0906 20:06:38.499269   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.499280   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:38.499287   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:38.499340   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:38.536297   73230 cri.go:89] found id: ""
	I0906 20:06:38.536334   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.536345   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:38.536352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:38.536417   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:38.573672   73230 cri.go:89] found id: ""
	I0906 20:06:38.573701   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.573712   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:38.573720   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:38.573779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:38.610913   73230 cri.go:89] found id: ""
	I0906 20:06:38.610937   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.610945   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:38.610950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:38.610996   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:38.647335   73230 cri.go:89] found id: ""
	I0906 20:06:38.647359   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.647368   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:38.647374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:38.647418   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:38.684054   73230 cri.go:89] found id: ""
	I0906 20:06:38.684084   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.684097   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:38.684106   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:38.684174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:38.731134   73230 cri.go:89] found id: ""
	I0906 20:06:38.731161   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.731173   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:38.731183   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:38.731199   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:38.787757   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:38.787798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:38.802920   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:38.802955   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:38.889219   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:38.889246   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:38.889261   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:38.964999   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:38.965042   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:37.838886   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.338914   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:38.156615   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.656577   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:39.691609   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.692330   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.504406   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:41.518111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:41.518169   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:41.558701   73230 cri.go:89] found id: ""
	I0906 20:06:41.558727   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.558738   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:41.558746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:41.558807   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:41.595986   73230 cri.go:89] found id: ""
	I0906 20:06:41.596009   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.596017   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:41.596023   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:41.596070   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:41.631462   73230 cri.go:89] found id: ""
	I0906 20:06:41.631486   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.631494   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:41.631504   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:41.631559   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:41.669646   73230 cri.go:89] found id: ""
	I0906 20:06:41.669674   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.669686   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:41.669693   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:41.669754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:41.708359   73230 cri.go:89] found id: ""
	I0906 20:06:41.708383   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.708391   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:41.708398   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:41.708446   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:41.745712   73230 cri.go:89] found id: ""
	I0906 20:06:41.745737   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.745750   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:41.745756   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:41.745804   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:41.781862   73230 cri.go:89] found id: ""
	I0906 20:06:41.781883   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.781892   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:41.781898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:41.781946   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:41.816687   73230 cri.go:89] found id: ""
	I0906 20:06:41.816714   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.816722   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:41.816730   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:41.816742   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:41.830115   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:41.830145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:41.908303   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:41.908334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:41.908348   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:42.001459   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:42.001501   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:42.061341   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:42.061368   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:44.619574   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:44.633355   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:44.633423   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:44.668802   73230 cri.go:89] found id: ""
	I0906 20:06:44.668834   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.668845   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:44.668852   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:44.668924   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:44.707613   73230 cri.go:89] found id: ""
	I0906 20:06:44.707639   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.707650   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:44.707657   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:44.707727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:44.744202   73230 cri.go:89] found id: ""
	I0906 20:06:44.744231   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.744243   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:44.744250   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:44.744311   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:44.783850   73230 cri.go:89] found id: ""
	I0906 20:06:44.783873   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.783881   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:44.783886   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:44.783938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:44.824986   73230 cri.go:89] found id: ""
	I0906 20:06:44.825011   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.825019   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:44.825025   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:44.825073   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:44.865157   73230 cri.go:89] found id: ""
	I0906 20:06:44.865182   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.865190   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:44.865196   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:44.865258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:44.908268   73230 cri.go:89] found id: ""
	I0906 20:06:44.908295   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.908305   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:44.908312   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:44.908359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:44.948669   73230 cri.go:89] found id: ""
	I0906 20:06:44.948697   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.948706   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:44.948716   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:44.948731   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:44.961862   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:44.961887   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:45.036756   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:45.036783   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:45.036801   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:45.116679   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:45.116717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:45.159756   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:45.159784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:42.339271   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.839443   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:43.155878   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:45.158884   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.192211   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:46.692140   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.714682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:47.730754   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:47.730820   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:47.783208   73230 cri.go:89] found id: ""
	I0906 20:06:47.783239   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.783249   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:47.783255   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:47.783312   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:47.844291   73230 cri.go:89] found id: ""
	I0906 20:06:47.844324   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.844336   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:47.844344   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:47.844407   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:47.881877   73230 cri.go:89] found id: ""
	I0906 20:06:47.881905   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.881913   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:47.881919   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:47.881986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:47.918034   73230 cri.go:89] found id: ""
	I0906 20:06:47.918058   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.918066   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:47.918072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:47.918126   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:47.957045   73230 cri.go:89] found id: ""
	I0906 20:06:47.957068   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.957077   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:47.957083   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:47.957134   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:47.993849   73230 cri.go:89] found id: ""
	I0906 20:06:47.993872   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.993883   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:47.993890   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:47.993951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:48.031214   73230 cri.go:89] found id: ""
	I0906 20:06:48.031239   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.031249   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:48.031257   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:48.031314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:48.064634   73230 cri.go:89] found id: ""
	I0906 20:06:48.064673   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.064690   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:48.064698   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:48.064710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:48.104307   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:48.104343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:48.158869   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:48.158900   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:48.173000   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:48.173026   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:48.248751   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:48.248774   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:48.248792   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:47.339014   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.339656   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.838817   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.656402   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.156349   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:52.156651   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.192411   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.691635   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.833490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:50.847618   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:50.847702   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:50.887141   73230 cri.go:89] found id: ""
	I0906 20:06:50.887167   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.887176   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:50.887181   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:50.887228   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:50.923435   73230 cri.go:89] found id: ""
	I0906 20:06:50.923480   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.923491   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:50.923499   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:50.923567   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:50.959704   73230 cri.go:89] found id: ""
	I0906 20:06:50.959730   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.959742   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:50.959748   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:50.959810   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:50.992994   73230 cri.go:89] found id: ""
	I0906 20:06:50.993023   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.993032   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:50.993037   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:50.993091   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:51.031297   73230 cri.go:89] found id: ""
	I0906 20:06:51.031321   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.031329   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:51.031335   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:51.031390   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:51.067698   73230 cri.go:89] found id: ""
	I0906 20:06:51.067721   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.067732   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:51.067739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:51.067799   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:51.102240   73230 cri.go:89] found id: ""
	I0906 20:06:51.102268   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.102278   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:51.102285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:51.102346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:51.137146   73230 cri.go:89] found id: ""
	I0906 20:06:51.137172   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.137183   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:51.137194   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:51.137209   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:51.216158   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:51.216194   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:51.256063   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:51.256088   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:51.309176   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:51.309210   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:51.323515   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:51.323544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:51.393281   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:53.893714   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:53.907807   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:53.907863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:53.947929   73230 cri.go:89] found id: ""
	I0906 20:06:53.947954   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.947962   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:53.947968   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:53.948014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:53.983005   73230 cri.go:89] found id: ""
	I0906 20:06:53.983028   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.983041   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:53.983046   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:53.983094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:54.019004   73230 cri.go:89] found id: ""
	I0906 20:06:54.019027   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.019035   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:54.019041   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:54.019094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:54.060240   73230 cri.go:89] found id: ""
	I0906 20:06:54.060266   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.060279   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:54.060285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:54.060336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:54.096432   73230 cri.go:89] found id: ""
	I0906 20:06:54.096461   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.096469   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:54.096475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:54.096537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:54.132992   73230 cri.go:89] found id: ""
	I0906 20:06:54.133021   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.133033   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:54.133040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:54.133103   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:54.172730   73230 cri.go:89] found id: ""
	I0906 20:06:54.172754   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.172766   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:54.172778   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:54.172839   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:54.212050   73230 cri.go:89] found id: ""
	I0906 20:06:54.212191   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.212202   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:54.212212   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:54.212234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:54.263603   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:54.263647   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:54.281291   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:54.281324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:54.359523   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:54.359545   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:54.359568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:54.442230   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:54.442265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:54.339159   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.841459   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.157379   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.656134   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.191878   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.691766   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.983744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:56.997451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:56.997527   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:57.034792   73230 cri.go:89] found id: ""
	I0906 20:06:57.034817   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.034825   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:57.034831   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:57.034883   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:57.073709   73230 cri.go:89] found id: ""
	I0906 20:06:57.073735   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.073745   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:57.073751   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:57.073803   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:57.122758   73230 cri.go:89] found id: ""
	I0906 20:06:57.122787   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.122798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:57.122808   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:57.122865   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:57.158208   73230 cri.go:89] found id: ""
	I0906 20:06:57.158242   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.158252   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:57.158262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:57.158323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:57.194004   73230 cri.go:89] found id: ""
	I0906 20:06:57.194029   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.194037   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:57.194044   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:57.194099   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:57.230068   73230 cri.go:89] found id: ""
	I0906 20:06:57.230099   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.230111   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:57.230119   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:57.230186   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:57.265679   73230 cri.go:89] found id: ""
	I0906 20:06:57.265707   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.265718   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:57.265735   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:57.265801   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:57.304917   73230 cri.go:89] found id: ""
	I0906 20:06:57.304946   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.304956   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:57.304967   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:57.304980   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:57.357238   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:57.357276   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:57.371648   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:57.371674   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:57.438572   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:57.438590   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:57.438602   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:57.528212   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:57.528256   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:00.071140   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:00.084975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:00.085055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:00.119680   73230 cri.go:89] found id: ""
	I0906 20:07:00.119713   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.119725   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:00.119732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:00.119786   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:00.155678   73230 cri.go:89] found id: ""
	I0906 20:07:00.155704   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.155716   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:00.155723   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:00.155769   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:00.190758   73230 cri.go:89] found id: ""
	I0906 20:07:00.190783   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.190793   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:00.190799   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:00.190863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:00.228968   73230 cri.go:89] found id: ""
	I0906 20:07:00.228999   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.229010   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:00.229018   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:00.229079   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:00.265691   73230 cri.go:89] found id: ""
	I0906 20:07:00.265722   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.265733   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:00.265741   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:00.265806   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:00.305785   73230 cri.go:89] found id: ""
	I0906 20:07:00.305812   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.305820   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:00.305825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:00.305872   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:00.341872   73230 cri.go:89] found id: ""
	I0906 20:07:00.341895   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.341902   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:00.341907   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:00.341955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:00.377661   73230 cri.go:89] found id: ""
	I0906 20:07:00.377690   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.377702   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:00.377712   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:00.377725   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:00.428215   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:00.428254   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:00.443135   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:00.443165   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 20:06:59.337996   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.338924   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:58.657236   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.156973   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:59.191556   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.192082   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.193511   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	W0906 20:07:00.518745   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:00.518768   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:00.518781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:00.604413   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:00.604448   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.146657   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:03.160610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:03.160665   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:03.200916   73230 cri.go:89] found id: ""
	I0906 20:07:03.200950   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.200960   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:03.200967   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:03.201029   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:03.239550   73230 cri.go:89] found id: ""
	I0906 20:07:03.239579   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.239592   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:03.239600   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:03.239660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:03.278216   73230 cri.go:89] found id: ""
	I0906 20:07:03.278244   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.278255   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:03.278263   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:03.278325   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:03.315028   73230 cri.go:89] found id: ""
	I0906 20:07:03.315059   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.315073   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:03.315080   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:03.315146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:03.354614   73230 cri.go:89] found id: ""
	I0906 20:07:03.354638   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.354647   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:03.354652   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:03.354710   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:03.390105   73230 cri.go:89] found id: ""
	I0906 20:07:03.390129   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.390138   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:03.390144   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:03.390190   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:03.427651   73230 cri.go:89] found id: ""
	I0906 20:07:03.427679   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.427687   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:03.427695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:03.427763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:03.463191   73230 cri.go:89] found id: ""
	I0906 20:07:03.463220   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.463230   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:03.463242   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:03.463288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:03.476966   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:03.476995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:03.558415   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:03.558441   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:03.558457   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:03.641528   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:03.641564   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.680916   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:03.680943   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:03.339511   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.340113   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.157907   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.160507   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.692151   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:08.191782   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:06.235947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:06.249589   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:06.249667   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:06.289193   73230 cri.go:89] found id: ""
	I0906 20:07:06.289223   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.289235   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:06.289242   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:06.289305   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:06.324847   73230 cri.go:89] found id: ""
	I0906 20:07:06.324887   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.324898   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:06.324904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:06.324966   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:06.361755   73230 cri.go:89] found id: ""
	I0906 20:07:06.361786   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.361798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:06.361806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:06.361873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:06.397739   73230 cri.go:89] found id: ""
	I0906 20:07:06.397766   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.397775   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:06.397780   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:06.397833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:06.432614   73230 cri.go:89] found id: ""
	I0906 20:07:06.432641   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.432649   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:06.432655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:06.432703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:06.467784   73230 cri.go:89] found id: ""
	I0906 20:07:06.467812   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.467823   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:06.467830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:06.467890   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:06.507055   73230 cri.go:89] found id: ""
	I0906 20:07:06.507085   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.507096   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:06.507104   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:06.507165   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:06.544688   73230 cri.go:89] found id: ""
	I0906 20:07:06.544720   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.544730   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:06.544740   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:06.544751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:06.597281   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:06.597314   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:06.612749   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:06.612774   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:06.684973   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:06.684993   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:06.685006   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:06.764306   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:06.764345   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.304340   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:09.317460   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:09.317536   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:09.354289   73230 cri.go:89] found id: ""
	I0906 20:07:09.354312   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.354322   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:09.354327   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:09.354373   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:09.390962   73230 cri.go:89] found id: ""
	I0906 20:07:09.390997   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.391008   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:09.391015   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:09.391076   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:09.427456   73230 cri.go:89] found id: ""
	I0906 20:07:09.427491   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.427502   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:09.427510   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:09.427572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:09.462635   73230 cri.go:89] found id: ""
	I0906 20:07:09.462667   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.462680   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:09.462687   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:09.462749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:09.506726   73230 cri.go:89] found id: ""
	I0906 20:07:09.506751   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.506767   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:09.506775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:09.506836   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:09.541974   73230 cri.go:89] found id: ""
	I0906 20:07:09.541999   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.542009   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:09.542017   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:09.542077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:09.580069   73230 cri.go:89] found id: ""
	I0906 20:07:09.580104   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.580115   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:09.580123   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:09.580182   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:09.616025   73230 cri.go:89] found id: ""
	I0906 20:07:09.616054   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.616065   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:09.616075   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:09.616090   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:09.630967   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:09.630993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:09.716733   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:09.716766   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:09.716782   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:09.792471   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:09.792503   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.832326   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:09.832357   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:07.840909   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.339239   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:07.655710   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:09.656069   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:11.656458   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.192155   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.192716   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.385565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:12.398694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:12.398768   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:12.437446   73230 cri.go:89] found id: ""
	I0906 20:07:12.437473   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.437482   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:12.437487   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:12.437555   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:12.473328   73230 cri.go:89] found id: ""
	I0906 20:07:12.473355   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.473362   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:12.473372   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:12.473429   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:12.510935   73230 cri.go:89] found id: ""
	I0906 20:07:12.510962   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.510972   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:12.510979   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:12.511044   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:12.547961   73230 cri.go:89] found id: ""
	I0906 20:07:12.547991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.547999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:12.548005   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:12.548062   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:12.585257   73230 cri.go:89] found id: ""
	I0906 20:07:12.585291   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.585302   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:12.585309   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:12.585369   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:12.623959   73230 cri.go:89] found id: ""
	I0906 20:07:12.623991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.624003   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:12.624010   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:12.624066   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:12.662795   73230 cri.go:89] found id: ""
	I0906 20:07:12.662822   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.662832   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:12.662840   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:12.662896   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:12.700941   73230 cri.go:89] found id: ""
	I0906 20:07:12.700967   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.700974   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:12.700983   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:12.700994   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:12.785989   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:12.786025   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:12.826678   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:12.826704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:12.881558   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:12.881599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:12.896035   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:12.896065   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:12.970721   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:12.839031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.339615   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:13.656809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.657470   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:14.691032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:16.692697   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.471171   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:15.484466   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:15.484541   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:15.518848   73230 cri.go:89] found id: ""
	I0906 20:07:15.518875   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.518886   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:15.518894   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:15.518953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:15.553444   73230 cri.go:89] found id: ""
	I0906 20:07:15.553468   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.553476   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:15.553482   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:15.553528   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:15.589136   73230 cri.go:89] found id: ""
	I0906 20:07:15.589160   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.589168   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:15.589173   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:15.589220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:15.624410   73230 cri.go:89] found id: ""
	I0906 20:07:15.624434   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.624443   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:15.624449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:15.624492   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:15.661506   73230 cri.go:89] found id: ""
	I0906 20:07:15.661535   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.661547   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:15.661555   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:15.661615   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:15.699126   73230 cri.go:89] found id: ""
	I0906 20:07:15.699148   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.699155   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:15.699161   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:15.699207   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:15.736489   73230 cri.go:89] found id: ""
	I0906 20:07:15.736523   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.736534   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:15.736542   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:15.736604   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:15.771988   73230 cri.go:89] found id: ""
	I0906 20:07:15.772013   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.772020   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:15.772029   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:15.772045   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:15.822734   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:15.822765   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:15.836820   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:15.836872   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:15.915073   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:15.915111   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:15.915126   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:15.988476   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:15.988514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:18.528710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:18.541450   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:18.541526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:18.581278   73230 cri.go:89] found id: ""
	I0906 20:07:18.581308   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.581317   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:18.581323   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:18.581381   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:18.616819   73230 cri.go:89] found id: ""
	I0906 20:07:18.616843   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.616850   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:18.616871   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:18.616923   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:18.655802   73230 cri.go:89] found id: ""
	I0906 20:07:18.655827   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.655842   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:18.655849   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:18.655908   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:18.693655   73230 cri.go:89] found id: ""
	I0906 20:07:18.693679   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.693689   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:18.693696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:18.693779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:18.730882   73230 cri.go:89] found id: ""
	I0906 20:07:18.730914   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.730924   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:18.730931   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:18.730994   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:18.767219   73230 cri.go:89] found id: ""
	I0906 20:07:18.767243   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.767250   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:18.767256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:18.767316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:18.802207   73230 cri.go:89] found id: ""
	I0906 20:07:18.802230   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.802238   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:18.802243   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:18.802300   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:18.840449   73230 cri.go:89] found id: ""
	I0906 20:07:18.840471   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.840481   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:18.840491   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:18.840504   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:18.892430   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:18.892469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:18.906527   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:18.906561   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:18.980462   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:18.980483   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:18.980494   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:19.059550   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:19.059588   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:17.340292   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:19.840090   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.156486   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:20.657764   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.693021   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.191529   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.191865   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.599879   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:21.614131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:21.614205   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:21.650887   73230 cri.go:89] found id: ""
	I0906 20:07:21.650910   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.650919   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:21.650924   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:21.650978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:21.684781   73230 cri.go:89] found id: ""
	I0906 20:07:21.684809   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.684819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:21.684827   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:21.684907   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:21.722685   73230 cri.go:89] found id: ""
	I0906 20:07:21.722711   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.722722   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:21.722729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:21.722791   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:21.757581   73230 cri.go:89] found id: ""
	I0906 20:07:21.757607   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.757616   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:21.757622   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:21.757670   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:21.791984   73230 cri.go:89] found id: ""
	I0906 20:07:21.792008   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.792016   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:21.792022   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:21.792072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:21.853612   73230 cri.go:89] found id: ""
	I0906 20:07:21.853636   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.853644   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:21.853650   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:21.853699   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:21.894184   73230 cri.go:89] found id: ""
	I0906 20:07:21.894232   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.894247   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:21.894256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:21.894318   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:21.930731   73230 cri.go:89] found id: ""
	I0906 20:07:21.930758   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.930768   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:21.930779   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:21.930798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:21.969174   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:21.969207   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:22.017647   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:22.017680   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:22.033810   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:22.033852   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:22.111503   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:22.111530   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:22.111544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:24.696348   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:24.710428   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:24.710506   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:24.747923   73230 cri.go:89] found id: ""
	I0906 20:07:24.747958   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.747969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:24.747977   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:24.748037   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:24.782216   73230 cri.go:89] found id: ""
	I0906 20:07:24.782250   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.782260   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:24.782268   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:24.782329   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:24.822093   73230 cri.go:89] found id: ""
	I0906 20:07:24.822126   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.822137   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:24.822148   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:24.822217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:24.857166   73230 cri.go:89] found id: ""
	I0906 20:07:24.857202   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.857213   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:24.857224   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:24.857314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:24.892575   73230 cri.go:89] found id: ""
	I0906 20:07:24.892610   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.892621   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:24.892629   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:24.892689   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:24.929102   73230 cri.go:89] found id: ""
	I0906 20:07:24.929130   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.929140   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:24.929149   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:24.929206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:24.964224   73230 cri.go:89] found id: ""
	I0906 20:07:24.964257   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.964268   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:24.964276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:24.964337   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:25.000453   73230 cri.go:89] found id: ""
	I0906 20:07:25.000475   73230 logs.go:276] 0 containers: []
	W0906 20:07:25.000485   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:25.000496   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:25.000511   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:25.041824   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:25.041851   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:25.093657   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:25.093692   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:25.107547   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:25.107576   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:25.178732   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:25.178755   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:25.178771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:22.338864   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:24.339432   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:26.838165   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.156449   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.156979   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.158086   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.192653   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.693480   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.764271   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:27.777315   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:27.777389   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:27.812621   73230 cri.go:89] found id: ""
	I0906 20:07:27.812644   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.812655   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:27.812663   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:27.812718   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:27.853063   73230 cri.go:89] found id: ""
	I0906 20:07:27.853093   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.853104   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:27.853112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:27.853171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:27.894090   73230 cri.go:89] found id: ""
	I0906 20:07:27.894118   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.894130   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:27.894137   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:27.894196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:27.930764   73230 cri.go:89] found id: ""
	I0906 20:07:27.930791   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.930802   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:27.930809   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:27.930870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:27.967011   73230 cri.go:89] found id: ""
	I0906 20:07:27.967036   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.967047   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:27.967053   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:27.967111   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:28.002119   73230 cri.go:89] found id: ""
	I0906 20:07:28.002146   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.002157   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:28.002164   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:28.002226   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:28.043884   73230 cri.go:89] found id: ""
	I0906 20:07:28.043909   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.043917   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:28.043923   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:28.043979   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:28.081510   73230 cri.go:89] found id: ""
	I0906 20:07:28.081538   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.081547   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:28.081557   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:28.081568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:28.159077   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:28.159109   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:28.207489   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:28.207527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:28.267579   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:28.267613   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:28.287496   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:28.287529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:28.376555   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:28.838301   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.843091   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:29.655598   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:31.657757   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.192112   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:32.692354   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.876683   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:30.890344   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:30.890424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:30.930618   73230 cri.go:89] found id: ""
	I0906 20:07:30.930647   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.930658   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:30.930666   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:30.930727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:30.968801   73230 cri.go:89] found id: ""
	I0906 20:07:30.968825   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.968834   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:30.968839   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:30.968911   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:31.006437   73230 cri.go:89] found id: ""
	I0906 20:07:31.006463   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.006472   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:31.006477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:31.006531   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:31.042091   73230 cri.go:89] found id: ""
	I0906 20:07:31.042117   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.042125   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:31.042131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:31.042177   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:31.079244   73230 cri.go:89] found id: ""
	I0906 20:07:31.079271   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.079280   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:31.079286   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:31.079336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:31.116150   73230 cri.go:89] found id: ""
	I0906 20:07:31.116174   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.116182   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:31.116188   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:31.116240   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:31.151853   73230 cri.go:89] found id: ""
	I0906 20:07:31.151877   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.151886   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:31.151892   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:31.151939   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:31.189151   73230 cri.go:89] found id: ""
	I0906 20:07:31.189181   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.189192   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:31.189203   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:31.189218   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:31.234466   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:31.234493   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:31.286254   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:31.286288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:31.300500   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:31.300525   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:31.372968   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:31.372987   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:31.372997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:33.949865   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:33.964791   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:33.964849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:34.027049   73230 cri.go:89] found id: ""
	I0906 20:07:34.027082   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.027094   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:34.027102   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:34.027162   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:34.080188   73230 cri.go:89] found id: ""
	I0906 20:07:34.080218   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.080230   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:34.080237   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:34.080320   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:34.124146   73230 cri.go:89] found id: ""
	I0906 20:07:34.124171   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.124179   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:34.124185   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:34.124230   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:34.161842   73230 cri.go:89] found id: ""
	I0906 20:07:34.161864   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.161872   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:34.161878   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:34.161938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:34.201923   73230 cri.go:89] found id: ""
	I0906 20:07:34.201951   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.201961   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:34.201967   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:34.202032   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:34.246609   73230 cri.go:89] found id: ""
	I0906 20:07:34.246644   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.246656   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:34.246665   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:34.246739   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:34.287616   73230 cri.go:89] found id: ""
	I0906 20:07:34.287646   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.287657   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:34.287663   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:34.287721   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:34.322270   73230 cri.go:89] found id: ""
	I0906 20:07:34.322297   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.322309   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:34.322320   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:34.322334   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:34.378598   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:34.378633   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:34.392748   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:34.392781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:34.468620   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:34.468648   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:34.468663   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:34.548290   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:34.548324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:33.339665   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.339890   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:34.157895   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:36.656829   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.192386   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.192574   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.095962   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:37.110374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:37.110459   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:37.146705   73230 cri.go:89] found id: ""
	I0906 20:07:37.146732   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.146740   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:37.146746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:37.146802   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:37.185421   73230 cri.go:89] found id: ""
	I0906 20:07:37.185449   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.185461   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:37.185468   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:37.185532   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:37.224767   73230 cri.go:89] found id: ""
	I0906 20:07:37.224793   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.224801   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:37.224806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:37.224884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:37.265392   73230 cri.go:89] found id: ""
	I0906 20:07:37.265422   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.265432   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:37.265438   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:37.265496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:37.302065   73230 cri.go:89] found id: ""
	I0906 20:07:37.302093   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.302101   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:37.302107   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:37.302171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:37.341466   73230 cri.go:89] found id: ""
	I0906 20:07:37.341493   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.341505   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:37.341513   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:37.341576   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.377701   73230 cri.go:89] found id: ""
	I0906 20:07:37.377724   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.377732   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:37.377738   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:37.377798   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:37.412927   73230 cri.go:89] found id: ""
	I0906 20:07:37.412955   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.412966   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:37.412977   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:37.412993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:37.427750   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:37.427776   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:37.500904   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:37.500928   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:37.500945   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:37.583204   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:37.583246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:37.623477   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:37.623512   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.179798   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:40.194295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:40.194372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:40.229731   73230 cri.go:89] found id: ""
	I0906 20:07:40.229768   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.229779   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:40.229787   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:40.229848   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:40.275909   73230 cri.go:89] found id: ""
	I0906 20:07:40.275943   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.275956   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:40.275964   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:40.276049   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:40.316552   73230 cri.go:89] found id: ""
	I0906 20:07:40.316585   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.316594   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:40.316599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:40.316647   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:40.355986   73230 cri.go:89] found id: ""
	I0906 20:07:40.356017   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.356028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:40.356036   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:40.356095   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:40.396486   73230 cri.go:89] found id: ""
	I0906 20:07:40.396522   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.396535   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:40.396544   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:40.396609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:40.440311   73230 cri.go:89] found id: ""
	I0906 20:07:40.440338   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.440346   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:40.440352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:40.440414   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.346532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.839521   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.156737   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.156967   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.691703   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.691972   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:40.476753   73230 cri.go:89] found id: ""
	I0906 20:07:40.476781   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.476790   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:40.476797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:40.476844   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:40.514462   73230 cri.go:89] found id: ""
	I0906 20:07:40.514489   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.514500   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:40.514511   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:40.514527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:40.553670   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:40.553700   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.608304   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:40.608343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:40.622486   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:40.622514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:40.699408   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:40.699434   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:40.699451   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.278892   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:43.292455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:43.292526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:43.328900   73230 cri.go:89] found id: ""
	I0906 20:07:43.328929   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.328940   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:43.328948   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:43.329009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:43.366728   73230 cri.go:89] found id: ""
	I0906 20:07:43.366754   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.366762   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:43.366768   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:43.366817   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:43.401566   73230 cri.go:89] found id: ""
	I0906 20:07:43.401590   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.401599   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:43.401604   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:43.401650   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:43.437022   73230 cri.go:89] found id: ""
	I0906 20:07:43.437051   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.437063   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:43.437072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:43.437140   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:43.473313   73230 cri.go:89] found id: ""
	I0906 20:07:43.473342   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.473354   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:43.473360   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:43.473420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:43.513590   73230 cri.go:89] found id: ""
	I0906 20:07:43.513616   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.513624   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:43.513630   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:43.513690   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:43.549974   73230 cri.go:89] found id: ""
	I0906 20:07:43.550011   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.550025   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:43.550032   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:43.550100   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:43.592386   73230 cri.go:89] found id: ""
	I0906 20:07:43.592426   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.592444   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:43.592454   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:43.592482   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:43.607804   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:43.607841   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:43.679533   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:43.679568   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:43.679580   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.762111   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:43.762145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:43.802883   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:43.802908   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:42.340252   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:44.838648   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:46.838831   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.157956   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.657410   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.693014   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.693640   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.191509   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:46.358429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:46.371252   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:46.371326   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:46.406397   73230 cri.go:89] found id: ""
	I0906 20:07:46.406420   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.406430   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:46.406437   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:46.406496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:46.452186   73230 cri.go:89] found id: ""
	I0906 20:07:46.452209   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.452218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:46.452223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:46.452288   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:46.489418   73230 cri.go:89] found id: ""
	I0906 20:07:46.489443   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.489454   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:46.489461   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:46.489523   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:46.529650   73230 cri.go:89] found id: ""
	I0906 20:07:46.529679   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.529690   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:46.529698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:46.529760   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:46.566429   73230 cri.go:89] found id: ""
	I0906 20:07:46.566454   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.566466   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:46.566474   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:46.566539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:46.604999   73230 cri.go:89] found id: ""
	I0906 20:07:46.605026   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.605034   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:46.605040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:46.605085   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:46.643116   73230 cri.go:89] found id: ""
	I0906 20:07:46.643144   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.643155   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:46.643162   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:46.643222   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:46.679734   73230 cri.go:89] found id: ""
	I0906 20:07:46.679756   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.679764   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:46.679772   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:46.679784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:46.736380   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:46.736430   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:46.750649   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:46.750681   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:46.833098   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:46.833130   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:46.833146   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:46.912223   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:46.912267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.453662   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:49.466520   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:49.466585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:49.508009   73230 cri.go:89] found id: ""
	I0906 20:07:49.508038   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.508049   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:49.508056   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:49.508119   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:49.545875   73230 cri.go:89] found id: ""
	I0906 20:07:49.545900   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.545911   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:49.545918   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:49.545978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:49.584899   73230 cri.go:89] found id: ""
	I0906 20:07:49.584926   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.584933   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:49.584940   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:49.585001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:49.621044   73230 cri.go:89] found id: ""
	I0906 20:07:49.621073   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.621085   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:49.621092   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:49.621146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:49.657074   73230 cri.go:89] found id: ""
	I0906 20:07:49.657099   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.657108   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:49.657115   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:49.657174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:49.693734   73230 cri.go:89] found id: ""
	I0906 20:07:49.693759   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.693767   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:49.693773   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:49.693827   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:49.729920   73230 cri.go:89] found id: ""
	I0906 20:07:49.729950   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.729960   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:49.729965   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:49.730014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:49.765282   73230 cri.go:89] found id: ""
	I0906 20:07:49.765313   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.765324   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:49.765335   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:49.765350   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:49.842509   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:49.842531   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:49.842543   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:49.920670   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:49.920704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.961193   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:49.961220   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:50.014331   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:50.014366   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:48.839877   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:51.339381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.156290   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.157337   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.692055   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:53.191487   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.529758   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:52.543533   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:52.543596   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:52.582802   73230 cri.go:89] found id: ""
	I0906 20:07:52.582826   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.582838   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:52.582845   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:52.582909   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:52.625254   73230 cri.go:89] found id: ""
	I0906 20:07:52.625287   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.625308   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:52.625317   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:52.625383   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:52.660598   73230 cri.go:89] found id: ""
	I0906 20:07:52.660621   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.660632   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:52.660640   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:52.660703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:52.702980   73230 cri.go:89] found id: ""
	I0906 20:07:52.703004   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.703014   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:52.703021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:52.703082   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:52.740361   73230 cri.go:89] found id: ""
	I0906 20:07:52.740387   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.740394   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:52.740400   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:52.740447   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:52.780011   73230 cri.go:89] found id: ""
	I0906 20:07:52.780043   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.780056   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:52.780063   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:52.780123   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:52.825546   73230 cri.go:89] found id: ""
	I0906 20:07:52.825583   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.825595   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:52.825602   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:52.825659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:52.864347   73230 cri.go:89] found id: ""
	I0906 20:07:52.864381   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.864393   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:52.864403   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:52.864417   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:52.943041   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:52.943077   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:52.986158   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:52.986185   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:53.039596   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:53.039635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:53.054265   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:53.054295   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:53.125160   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:53.339887   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.839233   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.657521   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.157101   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.192803   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.692328   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.626058   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:55.639631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:55.639705   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:55.677283   73230 cri.go:89] found id: ""
	I0906 20:07:55.677304   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.677312   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:55.677317   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:55.677372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:55.714371   73230 cri.go:89] found id: ""
	I0906 20:07:55.714402   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.714414   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:55.714422   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:55.714509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:55.753449   73230 cri.go:89] found id: ""
	I0906 20:07:55.753487   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.753500   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:55.753507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:55.753575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:55.792955   73230 cri.go:89] found id: ""
	I0906 20:07:55.792987   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.792999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:55.793006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:55.793074   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:55.827960   73230 cri.go:89] found id: ""
	I0906 20:07:55.827985   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.827996   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:55.828003   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:55.828052   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:55.867742   73230 cri.go:89] found id: ""
	I0906 20:07:55.867765   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.867778   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:55.867785   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:55.867849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:55.907328   73230 cri.go:89] found id: ""
	I0906 20:07:55.907352   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.907359   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:55.907365   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:55.907424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:55.946057   73230 cri.go:89] found id: ""
	I0906 20:07:55.946091   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.946099   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:55.946108   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:55.946119   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:56.033579   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:56.033598   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:56.033611   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:56.116337   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:56.116372   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:56.163397   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:56.163428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:56.217189   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:56.217225   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:58.736147   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:58.749729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:58.749833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:58.786375   73230 cri.go:89] found id: ""
	I0906 20:07:58.786399   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.786406   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:58.786412   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:58.786460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:58.825188   73230 cri.go:89] found id: ""
	I0906 20:07:58.825210   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.825218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:58.825223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:58.825271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:58.866734   73230 cri.go:89] found id: ""
	I0906 20:07:58.866756   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.866764   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:58.866769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:58.866823   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:58.909742   73230 cri.go:89] found id: ""
	I0906 20:07:58.909774   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.909785   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:58.909793   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:58.909850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:58.950410   73230 cri.go:89] found id: ""
	I0906 20:07:58.950438   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.950447   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:58.950452   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:58.950500   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:58.987431   73230 cri.go:89] found id: ""
	I0906 20:07:58.987454   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.987462   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:58.987468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:58.987518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:59.023432   73230 cri.go:89] found id: ""
	I0906 20:07:59.023462   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.023474   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:59.023482   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:59.023544   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:59.057695   73230 cri.go:89] found id: ""
	I0906 20:07:59.057724   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.057734   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:59.057743   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:59.057755   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:59.109634   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:59.109671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:59.125436   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:59.125479   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:59.202018   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:59.202040   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:59.202054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:59.281418   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:59.281456   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:58.339751   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.842794   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.658145   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.155679   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.157913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.192179   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.193068   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:01.823947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:01.839055   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:01.839115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:01.876178   73230 cri.go:89] found id: ""
	I0906 20:08:01.876206   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.876215   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:01.876220   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:01.876274   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:01.912000   73230 cri.go:89] found id: ""
	I0906 20:08:01.912028   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.912038   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:01.912045   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:01.912107   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:01.948382   73230 cri.go:89] found id: ""
	I0906 20:08:01.948412   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.948420   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:01.948426   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:01.948474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:01.982991   73230 cri.go:89] found id: ""
	I0906 20:08:01.983019   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.983028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:01.983033   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:01.983080   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:02.016050   73230 cri.go:89] found id: ""
	I0906 20:08:02.016076   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.016085   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:02.016091   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:02.016151   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:02.051087   73230 cri.go:89] found id: ""
	I0906 20:08:02.051125   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.051137   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:02.051150   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:02.051214   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:02.093230   73230 cri.go:89] found id: ""
	I0906 20:08:02.093254   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.093263   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:02.093268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:02.093323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:02.130580   73230 cri.go:89] found id: ""
	I0906 20:08:02.130609   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.130619   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:02.130629   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:02.130644   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:02.183192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:02.183231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:02.199079   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:02.199110   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:02.274259   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:02.274279   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:02.274303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:02.356198   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:02.356234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:04.899180   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:04.912879   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:04.912955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:04.950598   73230 cri.go:89] found id: ""
	I0906 20:08:04.950632   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.950642   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:04.950656   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:04.950713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:04.986474   73230 cri.go:89] found id: ""
	I0906 20:08:04.986504   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.986513   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:04.986519   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:04.986570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:05.025837   73230 cri.go:89] found id: ""
	I0906 20:08:05.025868   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.025877   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:05.025884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:05.025934   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:05.063574   73230 cri.go:89] found id: ""
	I0906 20:08:05.063613   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.063622   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:05.063628   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:05.063674   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:05.101341   73230 cri.go:89] found id: ""
	I0906 20:08:05.101371   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.101383   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:05.101390   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:05.101461   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:05.148551   73230 cri.go:89] found id: ""
	I0906 20:08:05.148580   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.148591   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:05.148599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:05.148668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:05.186907   73230 cri.go:89] found id: ""
	I0906 20:08:05.186935   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.186945   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:05.186953   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:05.187019   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:05.226237   73230 cri.go:89] found id: ""
	I0906 20:08:05.226265   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.226275   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:05.226287   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:05.226300   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:05.242892   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:05.242925   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:05.317797   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:05.317824   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:05.317839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:05.400464   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:05.400500   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:05.442632   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:05.442657   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:03.340541   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:05.840156   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.655913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:06.657424   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.691255   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.191739   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.998033   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:08.012363   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:08.012441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:08.048816   73230 cri.go:89] found id: ""
	I0906 20:08:08.048847   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.048876   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:08.048884   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:08.048947   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:08.109623   73230 cri.go:89] found id: ""
	I0906 20:08:08.109650   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.109661   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:08.109668   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:08.109730   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:08.145405   73230 cri.go:89] found id: ""
	I0906 20:08:08.145432   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.145443   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:08.145451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:08.145514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:08.187308   73230 cri.go:89] found id: ""
	I0906 20:08:08.187344   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.187355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:08.187362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:08.187422   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:08.228782   73230 cri.go:89] found id: ""
	I0906 20:08:08.228815   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.228826   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:08.228833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:08.228918   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:08.269237   73230 cri.go:89] found id: ""
	I0906 20:08:08.269266   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.269276   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:08.269285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:08.269351   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:08.305115   73230 cri.go:89] found id: ""
	I0906 20:08:08.305141   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.305149   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:08.305155   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:08.305206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:08.345442   73230 cri.go:89] found id: ""
	I0906 20:08:08.345472   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.345483   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:08.345494   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:08.345510   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:08.396477   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:08.396518   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:08.410978   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:08.411002   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:08.486220   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:08.486247   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:08.486265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:08.574138   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:08.574190   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:08.339280   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:10.340142   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.156809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.160037   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.192303   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.192456   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.192684   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.117545   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:11.131884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:11.131944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:11.169481   73230 cri.go:89] found id: ""
	I0906 20:08:11.169507   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.169518   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:11.169525   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:11.169590   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:11.211068   73230 cri.go:89] found id: ""
	I0906 20:08:11.211092   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.211100   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:11.211105   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:11.211157   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:11.250526   73230 cri.go:89] found id: ""
	I0906 20:08:11.250560   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.250574   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:11.250580   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:11.250627   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:11.289262   73230 cri.go:89] found id: ""
	I0906 20:08:11.289284   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.289292   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:11.289299   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:11.289346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:11.335427   73230 cri.go:89] found id: ""
	I0906 20:08:11.335456   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.335467   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:11.335475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:11.335535   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:11.375481   73230 cri.go:89] found id: ""
	I0906 20:08:11.375509   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.375518   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:11.375524   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:11.375575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:11.416722   73230 cri.go:89] found id: ""
	I0906 20:08:11.416748   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.416758   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:11.416765   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:11.416830   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:11.452986   73230 cri.go:89] found id: ""
	I0906 20:08:11.453019   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.453030   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:11.453042   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:11.453059   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:11.466435   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:11.466461   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:11.545185   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:11.545212   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:11.545231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:11.627390   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:11.627422   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:11.674071   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:11.674098   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.225887   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:14.242121   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:14.242200   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:14.283024   73230 cri.go:89] found id: ""
	I0906 20:08:14.283055   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.283067   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:14.283074   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:14.283135   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:14.325357   73230 cri.go:89] found id: ""
	I0906 20:08:14.325379   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.325387   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:14.325392   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:14.325455   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:14.362435   73230 cri.go:89] found id: ""
	I0906 20:08:14.362459   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.362467   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:14.362473   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:14.362537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:14.398409   73230 cri.go:89] found id: ""
	I0906 20:08:14.398441   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.398450   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:14.398455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:14.398509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:14.434902   73230 cri.go:89] found id: ""
	I0906 20:08:14.434934   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.434943   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:14.434950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:14.435009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:14.476605   73230 cri.go:89] found id: ""
	I0906 20:08:14.476635   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.476647   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:14.476655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:14.476717   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:14.533656   73230 cri.go:89] found id: ""
	I0906 20:08:14.533681   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.533690   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:14.533696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:14.533753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:14.599661   73230 cri.go:89] found id: ""
	I0906 20:08:14.599685   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.599693   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:14.599702   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:14.599715   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.657680   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:14.657712   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:14.671594   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:14.671624   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:14.747945   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:14.747969   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:14.747979   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:14.829021   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:14.829057   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:12.838805   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:14.839569   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.659405   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:16.156840   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:15.692205   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.693709   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.373569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:17.388910   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:17.388987   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:17.428299   73230 cri.go:89] found id: ""
	I0906 20:08:17.428335   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.428347   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:17.428354   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:17.428419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:17.464660   73230 cri.go:89] found id: ""
	I0906 20:08:17.464685   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.464692   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:17.464697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:17.464758   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:17.500018   73230 cri.go:89] found id: ""
	I0906 20:08:17.500047   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.500059   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:17.500067   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:17.500130   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:17.536345   73230 cri.go:89] found id: ""
	I0906 20:08:17.536375   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.536386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:17.536394   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:17.536456   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:17.574668   73230 cri.go:89] found id: ""
	I0906 20:08:17.574696   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.574707   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:17.574715   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:17.574780   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:17.611630   73230 cri.go:89] found id: ""
	I0906 20:08:17.611653   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.611663   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:17.611669   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:17.611713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:17.647610   73230 cri.go:89] found id: ""
	I0906 20:08:17.647639   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.647649   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:17.647657   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:17.647724   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:17.686204   73230 cri.go:89] found id: ""
	I0906 20:08:17.686233   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.686246   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:17.686260   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:17.686273   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:17.702040   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:17.702069   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:17.775033   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:17.775058   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:17.775074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:17.862319   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:17.862359   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:17.905567   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:17.905604   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:17.339116   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:19.839554   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:21.839622   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:18.157104   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.657604   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.191024   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:22.192687   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.457191   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:20.471413   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:20.471474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:20.533714   73230 cri.go:89] found id: ""
	I0906 20:08:20.533749   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.533765   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:20.533772   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:20.533833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:20.580779   73230 cri.go:89] found id: ""
	I0906 20:08:20.580811   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.580823   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:20.580830   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:20.580902   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:20.619729   73230 cri.go:89] found id: ""
	I0906 20:08:20.619755   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.619763   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:20.619769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:20.619816   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:20.661573   73230 cri.go:89] found id: ""
	I0906 20:08:20.661599   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.661606   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:20.661612   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:20.661664   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:20.709409   73230 cri.go:89] found id: ""
	I0906 20:08:20.709443   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.709455   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:20.709463   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:20.709515   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:20.746743   73230 cri.go:89] found id: ""
	I0906 20:08:20.746783   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.746808   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:20.746816   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:20.746891   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:20.788129   73230 cri.go:89] found id: ""
	I0906 20:08:20.788155   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.788164   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:20.788170   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:20.788217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:20.825115   73230 cri.go:89] found id: ""
	I0906 20:08:20.825139   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.825147   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:20.825156   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:20.825167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:20.880975   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:20.881013   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:20.895027   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:20.895061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:20.972718   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:20.972739   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:20.972754   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:21.053062   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:21.053096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:23.595439   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:23.612354   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:23.612419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:23.654479   73230 cri.go:89] found id: ""
	I0906 20:08:23.654508   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.654519   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:23.654526   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:23.654591   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:23.690061   73230 cri.go:89] found id: ""
	I0906 20:08:23.690092   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.690103   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:23.690112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:23.690173   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:23.726644   73230 cri.go:89] found id: ""
	I0906 20:08:23.726670   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.726678   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:23.726684   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:23.726744   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:23.763348   73230 cri.go:89] found id: ""
	I0906 20:08:23.763378   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.763386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:23.763391   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:23.763452   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:23.799260   73230 cri.go:89] found id: ""
	I0906 20:08:23.799290   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.799299   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:23.799305   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:23.799359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:23.843438   73230 cri.go:89] found id: ""
	I0906 20:08:23.843470   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.843481   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:23.843489   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:23.843558   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:23.879818   73230 cri.go:89] found id: ""
	I0906 20:08:23.879847   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.879856   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:23.879867   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:23.879933   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:23.916182   73230 cri.go:89] found id: ""
	I0906 20:08:23.916207   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.916220   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:23.916229   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:23.916240   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:23.987003   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:23.987022   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:23.987033   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:24.073644   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:24.073684   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:24.118293   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:24.118328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:24.172541   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:24.172582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:23.840441   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.338539   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:23.155661   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:25.155855   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:27.157624   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:24.692350   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.692534   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.687747   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:26.702174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:26.702238   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:26.740064   73230 cri.go:89] found id: ""
	I0906 20:08:26.740093   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.740101   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:26.740108   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:26.740158   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:26.775198   73230 cri.go:89] found id: ""
	I0906 20:08:26.775227   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.775237   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:26.775244   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:26.775303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:26.808850   73230 cri.go:89] found id: ""
	I0906 20:08:26.808892   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.808903   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:26.808915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:26.808974   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:26.842926   73230 cri.go:89] found id: ""
	I0906 20:08:26.842953   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.842964   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:26.842972   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:26.843031   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:26.878621   73230 cri.go:89] found id: ""
	I0906 20:08:26.878649   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.878658   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:26.878664   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:26.878713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:26.921816   73230 cri.go:89] found id: ""
	I0906 20:08:26.921862   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.921875   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:26.921884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:26.921952   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:26.960664   73230 cri.go:89] found id: ""
	I0906 20:08:26.960692   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.960702   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:26.960709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:26.960771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:27.004849   73230 cri.go:89] found id: ""
	I0906 20:08:27.004904   73230 logs.go:276] 0 containers: []
	W0906 20:08:27.004913   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:27.004922   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:27.004934   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:27.056237   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:27.056267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:27.071882   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:27.071904   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:27.143927   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:27.143949   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:27.143961   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:27.223901   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:27.223935   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:29.766615   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:29.780295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:29.780367   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:29.817745   73230 cri.go:89] found id: ""
	I0906 20:08:29.817775   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.817784   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:29.817790   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:29.817852   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:29.855536   73230 cri.go:89] found id: ""
	I0906 20:08:29.855559   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.855567   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:29.855572   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:29.855628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:29.895043   73230 cri.go:89] found id: ""
	I0906 20:08:29.895092   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.895104   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:29.895111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:29.895178   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:29.939225   73230 cri.go:89] found id: ""
	I0906 20:08:29.939248   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.939256   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:29.939262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:29.939331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:29.974166   73230 cri.go:89] found id: ""
	I0906 20:08:29.974190   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.974198   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:29.974203   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:29.974258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:30.009196   73230 cri.go:89] found id: ""
	I0906 20:08:30.009226   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.009237   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:30.009245   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:30.009310   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:30.043939   73230 cri.go:89] found id: ""
	I0906 20:08:30.043962   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.043970   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:30.043976   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:30.044023   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:30.080299   73230 cri.go:89] found id: ""
	I0906 20:08:30.080328   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.080336   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:30.080345   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:30.080356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:30.131034   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:30.131068   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:30.145502   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:30.145536   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:30.219941   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:30.219963   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:30.219978   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:30.307958   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:30.307995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:28.839049   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.338815   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.656748   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.657112   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.192284   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.193181   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.854002   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:32.867937   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:32.867998   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:32.906925   73230 cri.go:89] found id: ""
	I0906 20:08:32.906957   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.906969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:32.906976   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:32.907038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:32.946662   73230 cri.go:89] found id: ""
	I0906 20:08:32.946691   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.946702   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:32.946710   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:32.946771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:32.981908   73230 cri.go:89] found id: ""
	I0906 20:08:32.981936   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.981944   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:32.981950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:32.982001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:33.014902   73230 cri.go:89] found id: ""
	I0906 20:08:33.014930   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.014939   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:33.014945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:33.015055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:33.051265   73230 cri.go:89] found id: ""
	I0906 20:08:33.051290   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.051298   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:33.051310   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:33.051363   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:33.085436   73230 cri.go:89] found id: ""
	I0906 20:08:33.085468   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.085480   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:33.085487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:33.085552   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:33.121483   73230 cri.go:89] found id: ""
	I0906 20:08:33.121509   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.121517   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:33.121523   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:33.121578   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:33.159883   73230 cri.go:89] found id: ""
	I0906 20:08:33.159915   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.159926   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:33.159937   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:33.159953   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:33.174411   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:33.174442   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:33.243656   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:33.243694   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:33.243710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:33.321782   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:33.321823   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:33.363299   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:33.363335   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:33.339645   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.839545   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.650358   72441 pod_ready.go:82] duration metric: took 4m0.000296679s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:32.650386   72441 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:32.650410   72441 pod_ready.go:39] duration metric: took 4m12.042795571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:32.650440   72441 kubeadm.go:597] duration metric: took 4m19.97234293s to restartPrimaryControlPlane
	W0906 20:08:32.650505   72441 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:32.650542   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:08:33.692877   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:36.192090   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:38.192465   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.916159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:35.929190   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:35.929265   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:35.962853   73230 cri.go:89] found id: ""
	I0906 20:08:35.962890   73230 logs.go:276] 0 containers: []
	W0906 20:08:35.962901   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:35.962909   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:35.962969   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:36.000265   73230 cri.go:89] found id: ""
	I0906 20:08:36.000309   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.000318   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:36.000324   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:36.000374   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:36.042751   73230 cri.go:89] found id: ""
	I0906 20:08:36.042781   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.042792   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:36.042800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:36.042859   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:36.077922   73230 cri.go:89] found id: ""
	I0906 20:08:36.077957   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.077967   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:36.077975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:36.078038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:36.114890   73230 cri.go:89] found id: ""
	I0906 20:08:36.114926   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.114937   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:36.114945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:36.114997   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:36.148058   73230 cri.go:89] found id: ""
	I0906 20:08:36.148089   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.148101   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:36.148108   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:36.148167   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:36.187334   73230 cri.go:89] found id: ""
	I0906 20:08:36.187361   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.187371   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:36.187379   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:36.187498   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:36.221295   73230 cri.go:89] found id: ""
	I0906 20:08:36.221331   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.221342   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:36.221353   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:36.221367   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:36.273489   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:36.273527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:36.287975   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:36.288005   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:36.366914   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:36.366937   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:36.366950   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:36.446582   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:36.446619   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.987075   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:39.001051   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:39.001113   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:39.038064   73230 cri.go:89] found id: ""
	I0906 20:08:39.038093   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.038103   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:39.038110   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:39.038175   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:39.075759   73230 cri.go:89] found id: ""
	I0906 20:08:39.075788   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.075799   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:39.075805   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:39.075866   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:39.113292   73230 cri.go:89] found id: ""
	I0906 20:08:39.113320   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.113331   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:39.113339   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:39.113404   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:39.157236   73230 cri.go:89] found id: ""
	I0906 20:08:39.157269   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.157281   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:39.157289   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:39.157362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:39.195683   73230 cri.go:89] found id: ""
	I0906 20:08:39.195704   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.195712   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:39.195717   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:39.195763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:39.234865   73230 cri.go:89] found id: ""
	I0906 20:08:39.234894   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.234903   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:39.234909   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:39.234961   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:39.269946   73230 cri.go:89] found id: ""
	I0906 20:08:39.269975   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.269983   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:39.269989   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:39.270034   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:39.306184   73230 cri.go:89] found id: ""
	I0906 20:08:39.306214   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.306225   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:39.306235   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:39.306249   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:39.357887   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:39.357920   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:39.371736   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:39.371767   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:39.445674   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:39.445695   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:39.445708   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:39.525283   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:39.525316   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.343370   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.839247   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.691846   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.694807   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.069066   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:42.083229   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:42.083313   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:42.124243   73230 cri.go:89] found id: ""
	I0906 20:08:42.124267   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.124275   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:42.124280   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:42.124330   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:42.162070   73230 cri.go:89] found id: ""
	I0906 20:08:42.162102   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.162113   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:42.162120   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:42.162183   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:42.199161   73230 cri.go:89] found id: ""
	I0906 20:08:42.199191   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.199201   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:42.199208   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:42.199266   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:42.236956   73230 cri.go:89] found id: ""
	I0906 20:08:42.236980   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.236991   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:42.236996   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:42.237068   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:42.272299   73230 cri.go:89] found id: ""
	I0906 20:08:42.272328   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.272336   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:42.272341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:42.272400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:42.310280   73230 cri.go:89] found id: ""
	I0906 20:08:42.310304   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.310312   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:42.310317   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:42.310362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:42.345850   73230 cri.go:89] found id: ""
	I0906 20:08:42.345873   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.345881   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:42.345887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:42.345937   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:42.380785   73230 cri.go:89] found id: ""
	I0906 20:08:42.380812   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.380820   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:42.380830   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:42.380843   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.435803   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:42.435839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:42.450469   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:42.450498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:42.521565   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:42.521587   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:42.521599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:42.595473   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:42.595508   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:45.136985   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:45.150468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:45.150540   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:45.186411   73230 cri.go:89] found id: ""
	I0906 20:08:45.186440   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.186448   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:45.186454   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:45.186521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:45.224463   73230 cri.go:89] found id: ""
	I0906 20:08:45.224495   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.224506   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:45.224513   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:45.224568   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:45.262259   73230 cri.go:89] found id: ""
	I0906 20:08:45.262286   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.262295   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:45.262301   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:45.262357   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:45.299463   73230 cri.go:89] found id: ""
	I0906 20:08:45.299492   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.299501   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:45.299507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:45.299561   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:45.336125   73230 cri.go:89] found id: ""
	I0906 20:08:45.336153   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.336162   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:45.336168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:45.336216   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:45.370397   73230 cri.go:89] found id: ""
	I0906 20:08:45.370427   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.370439   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:45.370448   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:45.370518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:45.406290   73230 cri.go:89] found id: ""
	I0906 20:08:45.406322   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.406333   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:45.406341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:45.406402   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:45.441560   73230 cri.go:89] found id: ""
	I0906 20:08:45.441592   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.441603   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:45.441614   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:45.441627   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.840127   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.349331   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.192059   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:47.691416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.508769   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:45.508811   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:45.523659   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:45.523696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:45.595544   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:45.595567   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:45.595582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:45.676060   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:45.676096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:48.216490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:48.230021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:48.230093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:48.267400   73230 cri.go:89] found id: ""
	I0906 20:08:48.267433   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.267444   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:48.267451   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:48.267519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:48.314694   73230 cri.go:89] found id: ""
	I0906 20:08:48.314722   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.314731   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:48.314739   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:48.314805   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:48.358861   73230 cri.go:89] found id: ""
	I0906 20:08:48.358895   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.358906   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:48.358915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:48.358990   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:48.398374   73230 cri.go:89] found id: ""
	I0906 20:08:48.398400   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.398410   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:48.398416   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:48.398488   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:48.438009   73230 cri.go:89] found id: ""
	I0906 20:08:48.438039   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.438050   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:48.438058   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:48.438115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:48.475970   73230 cri.go:89] found id: ""
	I0906 20:08:48.475998   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.476007   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:48.476013   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:48.476071   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:48.512191   73230 cri.go:89] found id: ""
	I0906 20:08:48.512220   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.512230   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:48.512237   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:48.512299   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:48.547820   73230 cri.go:89] found id: ""
	I0906 20:08:48.547850   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.547861   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:48.547872   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:48.547886   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:48.616962   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:48.616997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:48.631969   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:48.631998   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:48.717025   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:48.717043   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:48.717054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:48.796131   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:48.796167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:47.838558   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.839063   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.839099   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.693239   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:52.191416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.342030   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:51.355761   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:51.355845   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:51.395241   73230 cri.go:89] found id: ""
	I0906 20:08:51.395272   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.395283   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:51.395290   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:51.395350   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:51.433860   73230 cri.go:89] found id: ""
	I0906 20:08:51.433888   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.433897   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:51.433904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:51.433968   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:51.475568   73230 cri.go:89] found id: ""
	I0906 20:08:51.475598   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.475608   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:51.475615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:51.475678   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:51.512305   73230 cri.go:89] found id: ""
	I0906 20:08:51.512329   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.512337   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:51.512342   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:51.512391   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:51.545796   73230 cri.go:89] found id: ""
	I0906 20:08:51.545819   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.545827   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:51.545833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:51.545884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:51.578506   73230 cri.go:89] found id: ""
	I0906 20:08:51.578531   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.578539   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:51.578545   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:51.578609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:51.616571   73230 cri.go:89] found id: ""
	I0906 20:08:51.616596   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.616609   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:51.616615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:51.616660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:51.651542   73230 cri.go:89] found id: ""
	I0906 20:08:51.651566   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.651580   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:51.651588   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:51.651599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:51.705160   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:51.705193   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:51.719450   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:51.719477   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:51.789775   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:51.789796   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:51.789809   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:51.870123   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:51.870158   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
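
The block above is one pass of minikube's diagnostic loop: with the apiserver unreachable it lists every expected control-plane container via crictl, then gathers kubelet, dmesg, "describe nodes", CRI-O, and container-status output. A minimal sketch of the same pass, run directly on the node (commands copied from the log; the kubectl path is specific to this v1.20.0 run):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$c"    # empty output => "No container was found matching ..."
    done
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
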
	I0906 20:08:54.411818   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:54.425759   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:54.425818   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:54.467920   73230 cri.go:89] found id: ""
	I0906 20:08:54.467943   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.467951   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:54.467956   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:54.468008   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:54.508324   73230 cri.go:89] found id: ""
	I0906 20:08:54.508349   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.508357   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:54.508363   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:54.508410   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:54.544753   73230 cri.go:89] found id: ""
	I0906 20:08:54.544780   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.544790   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:54.544797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:54.544884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:54.581407   73230 cri.go:89] found id: ""
	I0906 20:08:54.581436   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.581446   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:54.581453   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:54.581514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:54.618955   73230 cri.go:89] found id: ""
	I0906 20:08:54.618986   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.618998   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:54.619006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:54.619065   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:54.656197   73230 cri.go:89] found id: ""
	I0906 20:08:54.656229   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.656248   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:54.656255   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:54.656316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:54.697499   73230 cri.go:89] found id: ""
	I0906 20:08:54.697536   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.697544   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:54.697549   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:54.697600   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:54.734284   73230 cri.go:89] found id: ""
	I0906 20:08:54.734313   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.734331   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:54.734342   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:54.734356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:54.811079   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:54.811100   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:54.811111   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:54.887309   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:54.887346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.930465   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:54.930499   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:55.000240   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:55.000303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:54.339076   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:54.833352   72867 pod_ready.go:82] duration metric: took 4m0.000854511s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:54.833398   72867 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:54.833423   72867 pod_ready.go:39] duration metric: took 4m14.79685184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:54.833458   72867 kubeadm.go:597] duration metric: took 4m22.254900492s to restartPrimaryControlPlane
	W0906 20:08:54.833525   72867 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:54.833576   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:08:54.192038   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:56.192120   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:58.193505   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:57.530956   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:57.544056   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:57.544136   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:57.584492   73230 cri.go:89] found id: ""
	I0906 20:08:57.584519   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.584528   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:57.584534   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:57.584585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:57.620220   73230 cri.go:89] found id: ""
	I0906 20:08:57.620250   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.620259   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:57.620265   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:57.620321   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:57.655245   73230 cri.go:89] found id: ""
	I0906 20:08:57.655268   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.655283   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:57.655288   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:57.655346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:57.690439   73230 cri.go:89] found id: ""
	I0906 20:08:57.690470   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.690481   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:57.690487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:57.690551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:57.728179   73230 cri.go:89] found id: ""
	I0906 20:08:57.728206   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.728214   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:57.728221   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:57.728270   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:57.763723   73230 cri.go:89] found id: ""
	I0906 20:08:57.763752   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.763761   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:57.763767   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:57.763825   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:57.799836   73230 cri.go:89] found id: ""
	I0906 20:08:57.799861   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.799869   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:57.799876   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:57.799922   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:57.834618   73230 cri.go:89] found id: ""
	I0906 20:08:57.834644   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.834651   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:57.834660   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:57.834671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:57.887297   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:57.887331   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:57.901690   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:57.901717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:57.969179   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:57.969209   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:57.969223   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:58.052527   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:58.052642   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:58.870446   72441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.219876198s)
	I0906 20:08:58.870530   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:08:58.888197   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:08:58.899185   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:08:58.909740   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:08:58.909762   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:08:58.909806   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:08:58.919589   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:08:58.919646   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:08:58.930386   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:08:58.940542   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:08:58.940621   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:08:58.951673   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.963471   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:08:58.963545   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.974638   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:08:58.984780   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:08:58.984843   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
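
The grep/rm pairs above are minikube's stale-config cleanup before re-running kubeadm init: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed. A compact sketch of the same check (the loop is an editorial restatement; the endpoint and paths are taken from the log):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
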
	I0906 20:08:58.995803   72441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:08:59.046470   72441 kubeadm.go:310] W0906 20:08:59.003226    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.047297   72441 kubeadm.go:310] W0906 20:08:59.004193    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.166500   72441 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:00.691499   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:02.692107   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:00.593665   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:00.608325   73230 kubeadm.go:597] duration metric: took 4m4.153407014s to restartPrimaryControlPlane
	W0906 20:09:00.608399   73230 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:00.608428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:05.878028   73230 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.269561172s)
	I0906 20:09:05.878112   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:05.893351   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:05.904668   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:05.915560   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:05.915583   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:05.915633   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:09:05.926566   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:05.926625   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:05.937104   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:09:05.946406   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:05.946467   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:05.956203   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.965691   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:05.965751   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.976210   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:09:05.986104   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:05.986174   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:05.996282   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:06.068412   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:09:06.068507   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:06.213882   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:06.214044   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:06.214191   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:09:06.406793   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.067295   72441 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:07.067370   72441 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:07.067449   72441 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:07.067595   72441 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:07.067737   72441 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:07.067795   72441 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.069381   72441 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:07.069477   72441 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:07.069559   72441 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:07.069652   72441 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:07.069733   72441 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:07.069825   72441 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:07.069898   72441 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:07.069981   72441 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:07.070068   72441 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:07.070178   72441 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:07.070279   72441 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:07.070349   72441 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:07.070424   72441 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:07.070494   72441 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:07.070592   72441 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:07.070669   72441 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.070755   72441 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.070828   72441 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.070916   72441 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.070972   72441 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:07.072214   72441 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.072317   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.072399   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.072487   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.072613   72441 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.072685   72441 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.072719   72441 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.072837   72441 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:07.072977   72441 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:07.073063   72441 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.515053ms
	I0906 20:09:07.073178   72441 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:07.073257   72441 kubeadm.go:310] [api-check] The API server is healthy after 5.001748851s
	I0906 20:09:07.073410   72441 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:07.073558   72441 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:07.073650   72441 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:07.073860   72441 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-458066 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:07.073936   72441 kubeadm.go:310] [bootstrap-token] Using token: 3t2lf6.w44vkc4kfppuo2gp
	I0906 20:09:07.075394   72441 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:07.075524   72441 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:07.075621   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:07.075738   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:07.075905   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:07.076003   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:07.076094   72441 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:07.076222   72441 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:07.076397   72441 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:07.076486   72441 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:07.076502   72441 kubeadm.go:310] 
	I0906 20:09:07.076579   72441 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:07.076594   72441 kubeadm.go:310] 
	I0906 20:09:07.076687   72441 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:07.076698   72441 kubeadm.go:310] 
	I0906 20:09:07.076727   72441 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:07.076810   72441 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:07.076893   72441 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:07.076900   72441 kubeadm.go:310] 
	I0906 20:09:07.077016   72441 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:07.077029   72441 kubeadm.go:310] 
	I0906 20:09:07.077090   72441 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:07.077105   72441 kubeadm.go:310] 
	I0906 20:09:07.077172   72441 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:07.077273   72441 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:07.077368   72441 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:07.077377   72441 kubeadm.go:310] 
	I0906 20:09:07.077496   72441 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:07.077589   72441 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:07.077600   72441 kubeadm.go:310] 
	I0906 20:09:07.077680   72441 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.077767   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:07.077807   72441 kubeadm.go:310] 	--control-plane 
	I0906 20:09:07.077817   72441 kubeadm.go:310] 
	I0906 20:09:07.077927   72441 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:07.077946   72441 kubeadm.go:310] 
	I0906 20:09:07.078053   72441 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.078191   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
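
The kubelet-check and api-check lines in the init output above poll two standard health endpoints; they can be reproduced by hand on the control-plane node (a sketch, assuming the default ports shown in the log; -k skips verification of the self-signed apiserver certificate):

    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
    curl -sk https://localhost:8443/readyz  && echo "apiserver ready"
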
	I0906 20:09:07.078206   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:09:07.078216   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:07.079782   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:07.080965   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:07.092500   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
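
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration. Its exact contents are not captured in this log; the following is only an illustrative bridge+portmap conflist of that general shape (the 10.244.0.0/16 pod subnet is an assumption, not read from this run):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
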
	I0906 20:09:07.112546   72441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:07.112618   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:07.112648   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-458066 minikube.k8s.io/updated_at=2024_09_06T20_09_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=embed-certs-458066 minikube.k8s.io/primary=true
	I0906 20:09:07.343125   72441 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:07.343284   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:06.408933   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:06.409043   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:06.409126   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:06.409242   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:06.409351   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:06.409445   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:06.409559   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:06.409666   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:06.409758   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:06.409870   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:06.409964   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:06.410010   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:06.410101   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:06.721268   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:06.888472   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.414908   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.505887   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.525704   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.525835   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.525913   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.699971   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:04.692422   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.193312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.701970   73230 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.702095   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.708470   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.710216   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.711016   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.714706   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:09:07.844097   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.344174   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.843884   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.343591   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.843748   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.344148   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.844002   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.343424   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.444023   72441 kubeadm.go:1113] duration metric: took 4.331471016s to wait for elevateKubeSystemPrivileges
	I0906 20:09:11.444067   72441 kubeadm.go:394] duration metric: took 4m58.815096997s to StartCluster
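
The repeated "get sa default" runs above are minikube waiting for the default ServiceAccount to appear after granting cluster-admin to kube-system:default (the clusterrolebinding created a few lines earlier). Roughly equivalent shell, using the commands from the log (the retry loop itself is an editorial restatement):

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig=/var/lib/minikube/kubeconfig
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
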
	I0906 20:09:11.444093   72441 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.444186   72441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:11.446093   72441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.446360   72441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:11.446430   72441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:11.446521   72441 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-458066"
	I0906 20:09:11.446542   72441 addons.go:69] Setting default-storageclass=true in profile "embed-certs-458066"
	I0906 20:09:11.446560   72441 addons.go:69] Setting metrics-server=true in profile "embed-certs-458066"
	I0906 20:09:11.446609   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:11.446615   72441 addons.go:234] Setting addon metrics-server=true in "embed-certs-458066"
	W0906 20:09:11.446663   72441 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:11.446694   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.446576   72441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-458066"
	I0906 20:09:11.446570   72441 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-458066"
	W0906 20:09:11.446779   72441 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:11.446810   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.447077   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447112   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447170   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447211   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447350   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447426   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447879   72441 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:11.449461   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:11.463673   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44603
	I0906 20:09:11.463676   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0906 20:09:11.464129   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464231   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464669   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464691   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.464675   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464745   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.465097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465139   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465608   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465634   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.465731   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465778   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.466622   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0906 20:09:11.466967   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.467351   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.467366   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.467622   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.467759   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.471093   72441 addons.go:234] Setting addon default-storageclass=true in "embed-certs-458066"
	W0906 20:09:11.471115   72441 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:11.471145   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.471524   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.471543   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.488980   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0906 20:09:11.489014   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0906 20:09:11.489399   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0906 20:09:11.489465   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489517   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489908   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.490116   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490134   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490144   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490158   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490411   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490427   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490481   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490872   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490886   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.491406   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.491500   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.491520   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.491619   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.493485   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.493901   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.495272   72441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:11.495274   72441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:11.496553   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:11.496575   72441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:11.496597   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.496647   72441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.496667   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:11.496684   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.500389   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500395   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500469   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.500786   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500808   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500952   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501105   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.501145   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501259   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501305   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.501389   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501501   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.510188   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0906 20:09:11.510617   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.511142   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.511169   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.511539   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.511754   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.513207   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.513439   72441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.513455   72441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:11.513474   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.516791   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517292   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.517323   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517563   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.517898   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.518085   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.518261   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.669057   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:11.705086   72441 node_ready.go:35] waiting up to 6m0s for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731651   72441 node_ready.go:49] node "embed-certs-458066" has status "Ready":"True"
	I0906 20:09:11.731679   72441 node_ready.go:38] duration metric: took 26.546983ms for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731691   72441 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:11.740680   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:11.767740   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:11.767760   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:11.771571   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.804408   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:11.804435   72441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:11.844160   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.856217   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:11.856240   72441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:11.899134   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:13.159543   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.315345353s)
	I0906 20:09:13.159546   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387931315s)
	I0906 20:09:13.159639   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159660   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159601   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159711   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.159985   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.159997   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160008   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160018   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160080   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160095   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160104   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160115   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160265   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160289   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160401   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160417   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185478   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.185512   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.185914   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.185934   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185949   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.228561   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.329382232s)
	I0906 20:09:13.228621   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.228636   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228924   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.228978   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.228991   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.229001   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.229229   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.229258   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.229270   72441 addons.go:475] Verifying addon metrics-server=true in "embed-certs-458066"
	I0906 20:09:13.230827   72441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:09.691281   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:11.692514   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:13.231988   72441 addons.go:510] duration metric: took 1.785558897s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
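The addon lines above show the manifests being copied into /etc/kubernetes/addons and then applied with the bundled kubectl under KUBECONFIG=/var/lib/minikube/kubeconfig. A minimal sketch of that apply step, run locally via os/exec rather than minikube's internal ssh_runner (paths mirror the log; this is not minikube's own code):

```go
// Illustrative only: re-run the metrics-server apply shown in the log,
// locally via os/exec instead of minikube's ssh_runner.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubectl", args...)
	// Point the bundled kubectl at the in-VM kubeconfig, as the log line does.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}
```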
	I0906 20:09:13.750043   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.247314   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.748039   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:16.748064   72441 pod_ready.go:82] duration metric: took 5.007352361s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:16.748073   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:14.192167   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.691856   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:18.754580   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:19.254643   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:19.254669   72441 pod_ready.go:82] duration metric: took 2.506589666s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:19.254680   72441 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762162   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.762188   72441 pod_ready.go:82] duration metric: took 1.507501384s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762202   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770835   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.770860   72441 pod_ready.go:82] duration metric: took 8.65029ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770872   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779692   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.779713   72441 pod_ready.go:82] duration metric: took 8.832607ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779725   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786119   72441 pod_ready.go:93] pod "kube-proxy-rzx2f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.786146   72441 pod_ready.go:82] duration metric: took 6.414063ms for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786158   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852593   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.852630   72441 pod_ready.go:82] duration metric: took 66.461213ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852642   72441 pod_ready.go:39] duration metric: took 9.120937234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
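The pod_ready lines above poll each system-critical pod until its Ready condition reports "True". A minimal sketch of that kind of readiness poll, using kubectl's JSONPath output; the pod name, namespace, and poll interval here are illustrative and this stands in for, rather than reproduces, minikube's internal helper:

```go
// Sketch of a readiness poll: ask kubectl for a pod's Ready condition
// until it is "True" or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPodReady(ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
			"-o", "jsonpath="+jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second) // example poll interval
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	if err := waitPodReady("kube-system", "kube-scheduler-embed-certs-458066", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pod is Ready")
}
```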
	I0906 20:09:20.852663   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:20.852729   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:20.871881   72441 api_server.go:72] duration metric: took 9.425481233s to wait for apiserver process to appear ...
	I0906 20:09:20.871911   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:20.871927   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:09:20.876997   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:09:20.878290   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:20.878314   72441 api_server.go:131] duration metric: took 6.396943ms to wait for apiserver health ...
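The healthz check above is a plain GET against the apiserver's /healthz endpoint, expecting HTTP 200 with body "ok". A small sketch of that probe; TLS verification is skipped here purely for illustration, whereas a real client would be configured with the cluster CA:

```go
// Sketch of the apiserver healthz probe the log describes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.118:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```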
	I0906 20:09:20.878324   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:21.057265   72441 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:21.057303   72441 system_pods.go:61] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.057312   72441 system_pods.go:61] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.057319   72441 system_pods.go:61] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.057326   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.057332   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.057338   72441 system_pods.go:61] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.057345   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.057356   72441 system_pods.go:61] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.057367   72441 system_pods.go:61] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.057381   72441 system_pods.go:74] duration metric: took 179.050809ms to wait for pod list to return data ...
	I0906 20:09:21.057394   72441 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:21.252816   72441 default_sa.go:45] found service account: "default"
	I0906 20:09:21.252842   72441 default_sa.go:55] duration metric: took 195.436403ms for default service account to be created ...
	I0906 20:09:21.252851   72441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:21.455714   72441 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:21.455742   72441 system_pods.go:89] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.455748   72441 system_pods.go:89] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.455752   72441 system_pods.go:89] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.455755   72441 system_pods.go:89] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.455759   72441 system_pods.go:89] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.455763   72441 system_pods.go:89] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.455766   72441 system_pods.go:89] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.455772   72441 system_pods.go:89] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.455776   72441 system_pods.go:89] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.455784   72441 system_pods.go:126] duration metric: took 202.909491ms to wait for k8s-apps to be running ...
	I0906 20:09:21.455791   72441 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:21.455832   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.474124   72441 system_svc.go:56] duration metric: took 18.325386ms WaitForService to wait for kubelet
	I0906 20:09:21.474150   72441 kubeadm.go:582] duration metric: took 10.027757317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:21.474172   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:21.653674   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:21.653697   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:21.653708   72441 node_conditions.go:105] duration metric: took 179.531797ms to run NodePressure ...
	I0906 20:09:21.653718   72441 start.go:241] waiting for startup goroutines ...
	I0906 20:09:21.653727   72441 start.go:246] waiting for cluster config update ...
	I0906 20:09:21.653740   72441 start.go:255] writing updated cluster config ...
	I0906 20:09:21.654014   72441 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:21.703909   72441 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:21.705502   72441 out.go:177] * Done! kubectl is now configured to use "embed-certs-458066" cluster and "default" namespace by default
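At this point kubectl has been pointed at the "embed-certs-458066" cluster and the "default" namespace. A quick follow-up one could run to confirm the active context, namespace, and node state; this is purely an illustrative check, not part of the test flow:

```go
// Illustrative post-"Done!" check: show the current context, namespace, and nodes.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, args := range [][]string{
		{"config", "current-context"},
		{"config", "view", "--minify", "-o", "jsonpath={..namespace}"},
		{"get", "nodes"},
	} {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl", args, "failed:", err)
			continue
		}
		fmt.Printf("$ kubectl %v\n%s\n", args, out)
	}
}
```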
	I0906 20:09:21.102986   72867 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.269383553s)
	I0906 20:09:21.103094   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.118935   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:21.129099   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:21.139304   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:21.139326   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:21.139374   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:09:21.149234   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:21.149289   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:21.160067   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:09:21.169584   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:21.169664   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:21.179885   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.190994   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:21.191062   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.201649   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:09:21.211165   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:21.211223   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
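The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that is missing or points elsewhere, so the following kubeadm init can regenerate it. A minimal local sketch of that cleanup logic (minikube actually does this over SSH with sudo grep/rm; this is not its code):

```go
// Sketch of the stale kubeconfig cleanup: keep a file only if it already
// references the expected control-plane endpoint, otherwise remove it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: remove so kubeadm rewrites it.
			_ = os.Remove(f)
			fmt.Println("removed stale", f)
			continue
		}
		fmt.Println("keeping", f)
	}
}
```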
	I0906 20:09:21.220998   72867 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:21.269780   72867 kubeadm.go:310] W0906 20:09:21.240800    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.270353   72867 kubeadm.go:310] W0906 20:09:21.241533    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.389445   72867 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:18.692475   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:21.193075   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:23.697031   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:26.191208   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:28.192166   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:30.493468   72867 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:30.493543   72867 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:30.493620   72867 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:30.493751   72867 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:30.493891   72867 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:30.493971   72867 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:30.495375   72867 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:30.495467   72867 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:30.495537   72867 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:30.495828   72867 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:30.495913   72867 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:30.495977   72867 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:30.496024   72867 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:30.496112   72867 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:30.496207   72867 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:30.496308   72867 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:30.496400   72867 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:30.496452   72867 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:30.496519   72867 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:30.496601   72867 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:30.496690   72867 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:30.496774   72867 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:30.496887   72867 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:30.496946   72867 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:30.497018   72867 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:30.497074   72867 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:30.498387   72867 out.go:235]   - Booting up control plane ...
	I0906 20:09:30.498472   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:30.498550   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:30.498616   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:30.498715   72867 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:30.498786   72867 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:30.498821   72867 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:30.498969   72867 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:30.499076   72867 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:30.499126   72867 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.325552ms
	I0906 20:09:30.499189   72867 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:30.499269   72867 kubeadm.go:310] [api-check] The API server is healthy after 5.002261512s
	I0906 20:09:30.499393   72867 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:30.499507   72867 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:30.499586   72867 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:30.499818   72867 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-653828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:30.499915   72867 kubeadm.go:310] [bootstrap-token] Using token: 6yha4r.f9kcjkhkq2u0pp1e
	I0906 20:09:30.501217   72867 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:30.501333   72867 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:30.501438   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:30.501630   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:30.501749   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:30.501837   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:30.501904   72867 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:30.501996   72867 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:30.502032   72867 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:30.502085   72867 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:30.502093   72867 kubeadm.go:310] 
	I0906 20:09:30.502153   72867 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:30.502166   72867 kubeadm.go:310] 
	I0906 20:09:30.502242   72867 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:30.502257   72867 kubeadm.go:310] 
	I0906 20:09:30.502290   72867 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:30.502358   72867 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:30.502425   72867 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:30.502433   72867 kubeadm.go:310] 
	I0906 20:09:30.502486   72867 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:30.502494   72867 kubeadm.go:310] 
	I0906 20:09:30.502529   72867 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:30.502536   72867 kubeadm.go:310] 
	I0906 20:09:30.502575   72867 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:30.502633   72867 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:30.502706   72867 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:30.502720   72867 kubeadm.go:310] 
	I0906 20:09:30.502791   72867 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:30.502882   72867 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:30.502893   72867 kubeadm.go:310] 
	I0906 20:09:30.502982   72867 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503099   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:30.503120   72867 kubeadm.go:310] 	--control-plane 
	I0906 20:09:30.503125   72867 kubeadm.go:310] 
	I0906 20:09:30.503240   72867 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:30.503247   72867 kubeadm.go:310] 
	I0906 20:09:30.503312   72867 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503406   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:09:30.503416   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:09:30.503424   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:30.504880   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:30.505997   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:30.517864   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
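The log only shows a 496-byte conflist being copied to /etc/cni/net.d/1-k8s.conflist; its exact contents are not reproduced in the output. The sketch below writes a generic bridge CNI conflist to that path to illustrate the shape of such a file; the JSON payload here is an example, not minikube's actual configuration:

```go
// Write an example bridge CNI conflist (illustrative contents only).
package main

import (
	"fmt"
	"os"
)

const exampleConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println("mkdir failed:", err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(exampleConflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("wrote example bridge conflist")
}
```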
	I0906 20:09:30.539641   72867 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:30.539731   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653828 minikube.k8s.io/updated_at=2024_09_06T20_09_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=default-k8s-diff-port-653828 minikube.k8s.io/primary=true
	I0906 20:09:30.539732   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.576812   72867 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:30.742163   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.242299   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.742502   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.192201   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.691488   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.242418   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:32.742424   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.242317   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.742587   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.242563   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.342481   72867 kubeadm.go:1113] duration metric: took 3.802829263s to wait for elevateKubeSystemPrivileges
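The repeated "kubectl get sa default" lines above are a retry loop: after kubeadm init, the "default" service account appears asynchronously, and the elevate step waits for it before the cluster-admin binding takes effect. A minimal sketch of that loop (binary path and kubeconfig mirror the log, but this is illustrative, not minikube's implementation):

```go
// Sketch of the "wait for the default service account" retry loop.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command(kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log timestamps show roughly half-second retries
	}
	fmt.Println("timed out waiting for default service account")
}
```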
	I0906 20:09:34.342520   72867 kubeadm.go:394] duration metric: took 5m1.826839653s to StartCluster
	I0906 20:09:34.342542   72867 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.342640   72867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:34.345048   72867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.345461   72867 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:34.345576   72867 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:34.345655   72867 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345691   72867 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653828"
	I0906 20:09:34.345696   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:34.345699   72867 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345712   72867 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345737   72867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653828"
	W0906 20:09:34.345703   72867 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:34.345752   72867 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.345762   72867 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:34.345779   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.345795   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.346102   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346136   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346174   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346195   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346231   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346201   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.347895   72867 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:34.349535   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:34.363021   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0906 20:09:34.363492   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.364037   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.364062   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.364463   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.365147   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.365186   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.365991   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36811
	I0906 20:09:34.366024   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0906 20:09:34.366472   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366512   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366953   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.366970   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367086   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.367113   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367494   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367642   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367988   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.368011   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.368282   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.375406   72867 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.375432   72867 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:34.375460   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.375825   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.375858   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.382554   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0906 20:09:34.383102   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.383600   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.383616   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.383938   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.384214   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.385829   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.387409   72867 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:34.388348   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:34.388366   72867 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:34.388381   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.392542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.392813   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.392828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.393018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.393068   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0906 20:09:34.393374   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.393439   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.393550   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.393686   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.394089   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.394116   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.394464   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.394651   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.396559   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.396712   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0906 20:09:34.397142   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.397646   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.397669   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.397929   72867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:34.398023   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.398468   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.398511   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.399007   72867 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.399024   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:34.399043   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.405024   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405057   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.405081   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405287   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.405479   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.405634   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.405752   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.414779   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0906 20:09:34.415230   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.415662   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.415679   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.415993   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.416151   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.417818   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.418015   72867 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.418028   72867 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:34.418045   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.421303   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421379   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.421399   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421645   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.421815   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.421979   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.422096   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.582923   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:34.600692   72867 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617429   72867 node_ready.go:49] node "default-k8s-diff-port-653828" has status "Ready":"True"
	I0906 20:09:34.617454   72867 node_ready.go:38] duration metric: took 16.723446ms for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617465   72867 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:34.632501   72867 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:34.679561   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.682999   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.746380   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:34.746406   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:34.876650   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:34.876680   72867 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:34.935388   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:34.935415   72867 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:35.092289   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:35.709257   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02965114s)
	I0906 20:09:35.709297   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026263795s)
	I0906 20:09:35.709352   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709373   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709319   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709398   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709810   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.709911   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709898   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709926   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.709954   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709962   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709876   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710029   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710047   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.710065   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.710226   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710238   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710636   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.710665   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710681   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754431   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.754458   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.754765   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.754781   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754821   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.181191   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:36.181219   72867 pod_ready.go:82] duration metric: took 1.54868366s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.181233   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.351617   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.259284594s)
	I0906 20:09:36.351684   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.351701   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.351992   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352078   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352100   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.352111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.352055   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352402   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352914   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352934   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352945   72867 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-653828"
	I0906 20:09:36.354972   72867 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:36.356127   72867 addons.go:510] duration metric: took 2.010554769s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0906 20:09:34.695700   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:37.193366   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:38.187115   72867 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:39.188966   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:39.188998   72867 pod_ready.go:82] duration metric: took 3.007757042s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:39.189012   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:41.196228   72867 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.206614   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.206636   72867 pod_ready.go:82] duration metric: took 3.017616218s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.206647   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212140   72867 pod_ready.go:93] pod "kube-proxy-7846f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.212165   72867 pod_ready.go:82] duration metric: took 5.512697ms for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212174   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217505   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.217527   72867 pod_ready.go:82] duration metric: took 5.346748ms for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217534   72867 pod_ready.go:39] duration metric: took 7.600058293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:42.217549   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:42.217600   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:42.235961   72867 api_server.go:72] duration metric: took 7.890460166s to wait for apiserver process to appear ...
	I0906 20:09:42.235987   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:42.236003   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:09:42.240924   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:09:42.241889   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:42.241912   72867 api_server.go:131] duration metric: took 5.919055ms to wait for apiserver health ...
	I0906 20:09:42.241922   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:42.247793   72867 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:42.247825   72867 system_pods.go:61] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.247833   72867 system_pods.go:61] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.247839   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.247845   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.247852   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.247857   72867 system_pods.go:61] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.247861   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.247866   72867 system_pods.go:61] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.247873   72867 system_pods.go:61] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.247883   72867 system_pods.go:74] duration metric: took 5.95413ms to wait for pod list to return data ...
	I0906 20:09:42.247893   72867 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:42.251260   72867 default_sa.go:45] found service account: "default"
	I0906 20:09:42.251277   72867 default_sa.go:55] duration metric: took 3.3795ms for default service account to be created ...
	I0906 20:09:42.251284   72867 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:42.256204   72867 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:42.256228   72867 system_pods.go:89] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.256233   72867 system_pods.go:89] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.256237   72867 system_pods.go:89] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.256241   72867 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.256245   72867 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.256249   72867 system_pods.go:89] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.256252   72867 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.256258   72867 system_pods.go:89] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.256261   72867 system_pods.go:89] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.256270   72867 system_pods.go:126] duration metric: took 4.981383ms to wait for k8s-apps to be running ...
	I0906 20:09:42.256278   72867 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:42.256323   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:42.272016   72867 system_svc.go:56] duration metric: took 15.727796ms WaitForService to wait for kubelet
	I0906 20:09:42.272050   72867 kubeadm.go:582] duration metric: took 7.926551396s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:42.272081   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:42.275486   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:42.275516   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:42.275527   72867 node_conditions.go:105] duration metric: took 3.439966ms to run NodePressure ...
	I0906 20:09:42.275540   72867 start.go:241] waiting for startup goroutines ...
	I0906 20:09:42.275548   72867 start.go:246] waiting for cluster config update ...
	I0906 20:09:42.275561   72867 start.go:255] writing updated cluster config ...
	I0906 20:09:42.275823   72867 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:42.326049   72867 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:42.328034   72867 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653828" cluster and "default" namespace by default
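The "Done!" line above means minikube has finished writing a kubeconfig context for this profile. A minimal, illustrative check of that state (not part of the test run; it assumes the context name matches the profile name, which is minikube's default):

    # confirm the active context and look at the kube-system pods of the new cluster
    kubectl config current-context        # expected: default-k8s-diff-port-653828
    kubectl --context default-k8s-diff-port-653828 -n kube-system get pods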
	I0906 20:09:39.692393   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.192176   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:44.691934   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:45.185317   72322 pod_ready.go:82] duration metric: took 4m0.000138495s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	E0906 20:09:45.185352   72322 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:09:45.185371   72322 pod_ready.go:39] duration metric: took 4m12.222584677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:45.185403   72322 kubeadm.go:597] duration metric: took 4m20.152442555s to restartPrimaryControlPlane
	W0906 20:09:45.185466   72322 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:45.185496   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:47.714239   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:09:47.714464   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:47.714711   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:09:52.715187   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:52.715391   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:02.716155   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:02.716424   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:11.446625   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261097398s)
	I0906 20:10:11.446717   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:11.472899   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:10:11.492643   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:10:11.509855   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:10:11.509878   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:10:11.509933   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:10:11.523039   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:10:11.523099   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:10:11.540484   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:10:11.560246   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:10:11.560323   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:10:11.585105   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.596067   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:10:11.596138   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.607049   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:10:11.616982   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:10:11.617058   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:10:11.627880   72322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:10:11.672079   72322 kubeadm.go:310] W0906 20:10:11.645236    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.672935   72322 kubeadm.go:310] W0906 20:10:11.646151    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.789722   72322 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:10:20.270339   72322 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:10:20.270450   72322 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:10:20.270551   72322 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:10:20.270697   72322 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:10:20.270837   72322 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:10:20.270932   72322 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:10:20.272324   72322 out.go:235]   - Generating certificates and keys ...
	I0906 20:10:20.272437   72322 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:10:20.272530   72322 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:10:20.272634   72322 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:10:20.272732   72322 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:10:20.272842   72322 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:10:20.272950   72322 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:10:20.273051   72322 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:10:20.273135   72322 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:10:20.273272   72322 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:10:20.273361   72322 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:10:20.273400   72322 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:10:20.273456   72322 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:10:20.273517   72322 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:10:20.273571   72322 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:10:20.273625   72322 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:10:20.273682   72322 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:10:20.273731   72322 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:10:20.273801   72322 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:10:20.273856   72322 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:10:20.275359   72322 out.go:235]   - Booting up control plane ...
	I0906 20:10:20.275466   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:10:20.275539   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:10:20.275595   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:10:20.275692   72322 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:10:20.275774   72322 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:10:20.275812   72322 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:10:20.275917   72322 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:10:20.276005   72322 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:10:20.276063   72322 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001365031s
	I0906 20:10:20.276127   72322 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:10:20.276189   72322 kubeadm.go:310] [api-check] The API server is healthy after 5.002810387s
	I0906 20:10:20.276275   72322 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:10:20.276410   72322 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:10:20.276480   72322 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:10:20.276639   72322 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-504385 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:10:20.276690   72322 kubeadm.go:310] [bootstrap-token] Using token: fv12w2.cc6vcthx5yn6r6ru
	I0906 20:10:20.277786   72322 out.go:235]   - Configuring RBAC rules ...
	I0906 20:10:20.277872   72322 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:10:20.277941   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:10:20.278082   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:10:20.278231   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:10:20.278351   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:10:20.278426   72322 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:10:20.278541   72322 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:10:20.278614   72322 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:10:20.278692   72322 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:10:20.278700   72322 kubeadm.go:310] 
	I0906 20:10:20.278780   72322 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:10:20.278790   72322 kubeadm.go:310] 
	I0906 20:10:20.278880   72322 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:10:20.278889   72322 kubeadm.go:310] 
	I0906 20:10:20.278932   72322 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:10:20.279023   72322 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:10:20.279079   72322 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:10:20.279086   72322 kubeadm.go:310] 
	I0906 20:10:20.279141   72322 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:10:20.279148   72322 kubeadm.go:310] 
	I0906 20:10:20.279186   72322 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:10:20.279195   72322 kubeadm.go:310] 
	I0906 20:10:20.279291   72322 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:10:20.279420   72322 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:10:20.279524   72322 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:10:20.279535   72322 kubeadm.go:310] 
	I0906 20:10:20.279647   72322 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:10:20.279756   72322 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:10:20.279767   72322 kubeadm.go:310] 
	I0906 20:10:20.279896   72322 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280043   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:10:20.280080   72322 kubeadm.go:310] 	--control-plane 
	I0906 20:10:20.280090   72322 kubeadm.go:310] 
	I0906 20:10:20.280230   72322 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:10:20.280258   72322 kubeadm.go:310] 
	I0906 20:10:20.280365   72322 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280514   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
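The join commands printed above carry a bootstrap token (fv12w2.cc6vcthx5yn6r6ru). As an illustrative aside, a standard way to confirm such a token is still valid on the control-plane node (hedged: this is a generic kubeadm command, not something the test executes):

    # list bootstrap tokens and their expiry on the control-plane node
    sudo kubeadm token list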
	I0906 20:10:20.280532   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:10:20.280541   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:10:20.282066   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:10:20.283228   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:10:20.294745   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
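The 496-byte file copied here is the bridge CNI conflist minikube generates when the kvm2 driver is paired with the crio runtime, as noted two lines up. A hedged way to inspect it after the run (assuming the no-preload-504385 profile is still up; the exact contents are whatever minikube templated and are not reproduced here):

    # print the generated bridge CNI conflist from inside the node
    minikube -p no-preload-504385 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist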
	I0906 20:10:20.317015   72322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-504385 minikube.k8s.io/updated_at=2024_09_06T20_10_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=no-preload-504385 minikube.k8s.io/primary=true
	I0906 20:10:20.528654   72322 ops.go:34] apiserver oom_adj: -16
	I0906 20:10:20.528681   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.029394   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.528922   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.029667   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.528814   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.029163   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.529709   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.029277   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.529466   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.668636   72322 kubeadm.go:1113] duration metric: took 4.351557657s to wait for elevateKubeSystemPrivileges
	I0906 20:10:24.668669   72322 kubeadm.go:394] duration metric: took 4m59.692142044s to StartCluster
	I0906 20:10:24.668690   72322 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.668775   72322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:10:24.670483   72322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.670765   72322 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:10:24.670874   72322 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:10:24.670975   72322 addons.go:69] Setting storage-provisioner=true in profile "no-preload-504385"
	I0906 20:10:24.670990   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:10:24.671015   72322 addons.go:234] Setting addon storage-provisioner=true in "no-preload-504385"
	W0906 20:10:24.671027   72322 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:10:24.670988   72322 addons.go:69] Setting default-storageclass=true in profile "no-preload-504385"
	I0906 20:10:24.671020   72322 addons.go:69] Setting metrics-server=true in profile "no-preload-504385"
	I0906 20:10:24.671053   72322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-504385"
	I0906 20:10:24.671069   72322 addons.go:234] Setting addon metrics-server=true in "no-preload-504385"
	I0906 20:10:24.671057   72322 host.go:66] Checking if "no-preload-504385" exists ...
	W0906 20:10:24.671080   72322 addons.go:243] addon metrics-server should already be in state true
	I0906 20:10:24.671112   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.671387   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671413   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671433   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671462   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671476   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671509   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.672599   72322 out.go:177] * Verifying Kubernetes components...
	I0906 20:10:24.674189   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:10:24.688494   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0906 20:10:24.689082   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.689564   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.689586   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.690020   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.690242   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.691753   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0906 20:10:24.691758   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0906 20:10:24.692223   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692314   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692744   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692761   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.692892   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692912   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.693162   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693498   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693821   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.693851   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694035   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694067   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694118   72322 addons.go:234] Setting addon default-storageclass=true in "no-preload-504385"
	W0906 20:10:24.694133   72322 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:10:24.694159   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.694503   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694533   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.710695   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0906 20:10:24.712123   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.712820   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.712844   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.713265   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.713488   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.714238   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0906 20:10:24.714448   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0906 20:10:24.714584   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.714801   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.715454   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715472   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715517   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.715631   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715643   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715961   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716468   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716527   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.717120   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.717170   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.717534   72322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:10:24.718838   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.719392   72322 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:24.719413   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:10:24.719435   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.720748   72322 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:10:22.717567   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:22.717827   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:24.722045   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:10:24.722066   72322 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:10:24.722084   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.722722   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723383   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.723408   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723545   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.723788   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.723970   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.724133   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.725538   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.725987   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.726006   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.726137   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.726317   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.726499   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.726629   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.734236   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0906 20:10:24.734597   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.735057   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.735069   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.735479   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.735612   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.737446   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.737630   72322 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:24.737647   72322 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:10:24.737658   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.740629   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741040   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.741063   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741251   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.741418   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.741530   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.741659   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.903190   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:10:24.944044   72322 node_ready.go:35] waiting up to 6m0s for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960395   72322 node_ready.go:49] node "no-preload-504385" has status "Ready":"True"
	I0906 20:10:24.960436   72322 node_ready.go:38] duration metric: took 16.357022ms for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960453   72322 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:24.981153   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:25.103072   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:25.113814   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:10:25.113843   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:10:25.123206   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:25.209178   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:10:25.209208   72322 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:10:25.255577   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.255604   72322 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:10:25.297179   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.336592   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336615   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.336915   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.336930   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.336938   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336945   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.337164   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.337178   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.350330   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.350356   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.350630   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.350648   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850349   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850377   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850688   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.850707   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850717   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850725   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850974   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.851012   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.033886   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.033918   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034215   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034221   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034241   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034250   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.034258   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034525   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034533   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034579   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034593   72322 addons.go:475] Verifying addon metrics-server=true in "no-preload-504385"
	I0906 20:10:26.036358   72322 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0906 20:10:26.037927   72322 addons.go:510] duration metric: took 1.367055829s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
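At this point three addons are enabled, while the pod listings a few lines later still show metrics-server as Pending / ContainersNotReady. An illustrative follow-up, assuming the addon's Deployment carries the stock name metrics-server and that the kubeconfig context matches the profile name:

    # list addon state for the profile and watch the metrics-server rollout
    minikube -p no-preload-504385 addons list
    kubectl --context no-preload-504385 -n kube-system rollout status deployment/metrics-server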
	I0906 20:10:26.989945   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:28.987386   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:28.987407   72322 pod_ready.go:82] duration metric: took 4.006228588s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:28.987419   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:30.994020   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:32.999308   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:32.999332   72322 pod_ready.go:82] duration metric: took 4.01190401s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:32.999344   72322 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005872   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.005898   72322 pod_ready.go:82] duration metric: took 1.006546878s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005908   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010279   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.010306   72322 pod_ready.go:82] duration metric: took 4.391154ms for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010315   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014331   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.014346   72322 pod_ready.go:82] duration metric: took 4.025331ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014354   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018361   72322 pod_ready.go:93] pod "kube-proxy-48s2x" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.018378   72322 pod_ready.go:82] duration metric: took 4.018525ms for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018386   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191606   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.191630   72322 pod_ready.go:82] duration metric: took 173.23777ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191638   72322 pod_ready.go:39] duration metric: took 9.231173272s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:34.191652   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:10:34.191738   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:10:34.207858   72322 api_server.go:72] duration metric: took 9.537052258s to wait for apiserver process to appear ...
	I0906 20:10:34.207883   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:10:34.207904   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:10:34.214477   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:10:34.216178   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:10:34.216211   72322 api_server.go:131] duration metric: took 8.319856ms to wait for apiserver health ...
	I0906 20:10:34.216221   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:10:34.396409   72322 system_pods.go:59] 9 kube-system pods found
	I0906 20:10:34.396443   72322 system_pods.go:61] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.396451   72322 system_pods.go:61] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.396456   72322 system_pods.go:61] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.396461   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.396468   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.396472   72322 system_pods.go:61] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.396477   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.396487   72322 system_pods.go:61] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.396502   72322 system_pods.go:61] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.396514   72322 system_pods.go:74] duration metric: took 180.284785ms to wait for pod list to return data ...
	I0906 20:10:34.396526   72322 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:10:34.592160   72322 default_sa.go:45] found service account: "default"
	I0906 20:10:34.592186   72322 default_sa.go:55] duration metric: took 195.651674ms for default service account to be created ...
	I0906 20:10:34.592197   72322 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:10:34.795179   72322 system_pods.go:86] 9 kube-system pods found
	I0906 20:10:34.795210   72322 system_pods.go:89] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.795217   72322 system_pods.go:89] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.795221   72322 system_pods.go:89] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.795224   72322 system_pods.go:89] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.795228   72322 system_pods.go:89] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.795232   72322 system_pods.go:89] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.795238   72322 system_pods.go:89] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.795244   72322 system_pods.go:89] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.795249   72322 system_pods.go:89] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.795258   72322 system_pods.go:126] duration metric: took 203.05524ms to wait for k8s-apps to be running ...
	I0906 20:10:34.795270   72322 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:10:34.795328   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:34.810406   72322 system_svc.go:56] duration metric: took 15.127486ms WaitForService to wait for kubelet
	I0906 20:10:34.810437   72322 kubeadm.go:582] duration metric: took 10.13963577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:10:34.810461   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:10:34.993045   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:10:34.993077   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:10:34.993092   72322 node_conditions.go:105] duration metric: took 182.626456ms to run NodePressure ...
	I0906 20:10:34.993105   72322 start.go:241] waiting for startup goroutines ...
	I0906 20:10:34.993112   72322 start.go:246] waiting for cluster config update ...
	I0906 20:10:34.993122   72322 start.go:255] writing updated cluster config ...
	I0906 20:10:34.993401   72322 ssh_runner.go:195] Run: rm -f paused
	I0906 20:10:35.043039   72322 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:10:35.045782   72322 out.go:177] * Done! kubectl is now configured to use "no-preload-504385" cluster and "default" namespace by default
	I0906 20:11:02.719781   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:02.720062   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:02.720077   73230 kubeadm.go:310] 
	I0906 20:11:02.720125   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:11:02.720177   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:11:02.720189   73230 kubeadm.go:310] 
	I0906 20:11:02.720246   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:11:02.720290   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:11:02.720443   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:11:02.720469   73230 kubeadm.go:310] 
	I0906 20:11:02.720593   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:11:02.720665   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:11:02.720722   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:11:02.720746   73230 kubeadm.go:310] 
	I0906 20:11:02.720900   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:11:02.721018   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:11:02.721028   73230 kubeadm.go:310] 
	I0906 20:11:02.721180   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:11:02.721311   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:11:02.721405   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:11:02.721500   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:11:02.721512   73230 kubeadm.go:310] 
	I0906 20:11:02.722088   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:11:02.722199   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:11:02.722310   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0906 20:11:02.722419   73230 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0906 20:11:02.722469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:11:03.188091   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:11:03.204943   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:11:03.215434   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:11:03.215458   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:11:03.215506   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:11:03.225650   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:11:03.225713   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:11:03.236252   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:11:03.245425   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:11:03.245489   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:11:03.255564   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.264932   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:11:03.265014   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.274896   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:11:03.284027   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:11:03.284092   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:11:03.294368   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:11:03.377411   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:11:03.377509   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:11:03.537331   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:11:03.537590   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:11:03.537722   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:11:03.728458   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:11:03.730508   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:11:03.730621   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:11:03.730720   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:11:03.730869   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:11:03.730984   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:11:03.731082   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:11:03.731167   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:11:03.731258   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:11:03.731555   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:11:03.731896   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:11:03.732663   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:11:03.732953   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:11:03.733053   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:11:03.839927   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:11:03.988848   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:11:04.077497   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:11:04.213789   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:11:04.236317   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:11:04.237625   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:11:04.237719   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:11:04.399036   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:11:04.400624   73230 out.go:235]   - Booting up control plane ...
	I0906 20:11:04.400709   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:11:04.401417   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:11:04.402751   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:11:04.404122   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:11:04.407817   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:11:44.410273   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:11:44.410884   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:44.411132   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:49.411428   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:49.411674   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:59.412917   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:59.413182   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:19.414487   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:19.414692   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415457   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:59.415729   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415750   73230 kubeadm.go:310] 
	I0906 20:12:59.415808   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:12:59.415864   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:12:59.415874   73230 kubeadm.go:310] 
	I0906 20:12:59.415933   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:12:59.415979   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:12:59.416147   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:12:59.416167   73230 kubeadm.go:310] 
	I0906 20:12:59.416332   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:12:59.416372   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:12:59.416420   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:12:59.416428   73230 kubeadm.go:310] 
	I0906 20:12:59.416542   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:12:59.416650   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:12:59.416659   73230 kubeadm.go:310] 
	I0906 20:12:59.416818   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:12:59.416928   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:12:59.417030   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:12:59.417139   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:12:59.417153   73230 kubeadm.go:310] 
	I0906 20:12:59.417400   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:12:59.417485   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:12:59.417559   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0906 20:12:59.417626   73230 kubeadm.go:394] duration metric: took 8m3.018298427s to StartCluster
	I0906 20:12:59.417673   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:12:59.417741   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:12:59.464005   73230 cri.go:89] found id: ""
	I0906 20:12:59.464033   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.464040   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:12:59.464045   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:12:59.464101   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:12:59.504218   73230 cri.go:89] found id: ""
	I0906 20:12:59.504252   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.504264   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:12:59.504271   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:12:59.504327   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:12:59.541552   73230 cri.go:89] found id: ""
	I0906 20:12:59.541579   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.541589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:12:59.541596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:12:59.541663   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:12:59.580135   73230 cri.go:89] found id: ""
	I0906 20:12:59.580158   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.580168   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:12:59.580174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:12:59.580220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:12:59.622453   73230 cri.go:89] found id: ""
	I0906 20:12:59.622486   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.622498   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:12:59.622518   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:12:59.622587   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:12:59.661561   73230 cri.go:89] found id: ""
	I0906 20:12:59.661590   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.661601   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:12:59.661608   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:12:59.661668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:12:59.695703   73230 cri.go:89] found id: ""
	I0906 20:12:59.695732   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.695742   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:12:59.695749   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:12:59.695808   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:12:59.739701   73230 cri.go:89] found id: ""
	I0906 20:12:59.739733   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.739744   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:12:59.739756   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:12:59.739771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:12:59.791400   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:12:59.791428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:12:59.851142   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:12:59.851179   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:12:59.867242   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:12:59.867278   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:12:59.941041   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:12:59.941060   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:12:59.941071   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0906 20:13:00.061377   73230 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 20:13:00.061456   73230 out.go:270] * 
	W0906 20:13:00.061515   73230 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.061532   73230 out.go:270] * 
	W0906 20:13:00.062343   73230 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 20:13:00.065723   73230 out.go:201] 
	W0906 20:13:00.066968   73230 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.067028   73230 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 20:13:00.067059   73230 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 20:13:00.068497   73230 out.go:201] 
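
A minimal, hypothetical sketch of acting on the suggestion in the log above: retry the failed start with the cgroup-driver flag it names, then re-run the troubleshooting commands kubeadm already recommends. The profile name <profile> is a placeholder, not taken from this report; all commands below appear in, or follow directly from, the log output above.

  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
  # inside the node (e.g. via `minikube ssh -p <profile>`), check kubelet health as the log suggests:
  sudo systemctl status kubelet
  sudo journalctl -xeu kubelet | tail -n 100
  # list control-plane containers through CRI-O's socket, per the log's hint:
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
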
	
	
	==> CRI-O <==
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.676977706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653903676954834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bfd379ac-cce6-46a8-b798-27a3debb26f9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.677466961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec665690-f32d-4701-9279-de510870b8e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.677516891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec665690-f32d-4701-9279-de510870b8e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.677704620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a310412e4fc593a868b560923edaf2a2d97a8781f3bf198ddef6fcbabc30ea,PodSandboxId:ed8b0ac0ccfab7815363049809c3b4d30150855a7effa6522b8e64e7a0abb248,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653353617080301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51644de2-a533-44ec-8e7e-4842e80a896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dca79959ab052931eb8dfe83b403032d1dc8cb5cd45d5c9558c1acef26a20a8,PodSandboxId:88db6addd475cc2829a38b167389c9a5fd92e007133f926fb1f77511e57bd0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653353045978239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-br45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9992e3-3e5f-437d-90e0-b1087dca42e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5c25ddf467f94545e934f86a69678236422b4aabc5bb7c79a7d2c178cc6204,PodSandboxId:980c51c1efd8837a63f4de3db4b86192367e7d8fd78b00db87351c867c895fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653352927545519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gtlxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
806a981-e9dc-46ec-b440-94ea611c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f743811765445b814dcf080d2da3c45480620c42cd79fa8c2de33f996dd26c70,PodSandboxId:d5f43957a49278340fcb415e458015e5299fc1a163728abd2c58b2033c4c7b0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1725653352090981196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzx2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e52ab6-7d95-4a7a-acfa-66bbc748d1db,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0967ba02d355613d37db995ed77ff29c0e033806e963c18202dedeb7a6dc4c83,PodSandboxId:a0209b1658f52d092727669d36fcfc48b6e96a1b37d322083df75347c56f63f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653341088003524,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7b22e239d297d4a55de7cc9009cb12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30c0a5d7a13b5a7143ad119b5b65b7d84f9933225688694c3927007ce8208e,PodSandboxId:7adbf46638a3d35a4b64d98021e9e72559a95abe03a7c3b94b28e44ebcb862a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653341112222954,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b658c82eb54e0d4714bca5ecca195e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c4dcf1da46f860ab0d70d0478786996b10b91b427863964edcc8c26ce450672,PodSandboxId:d4878d4eed5725cfba45f4570d494a40f897e1f9f97006eabb3bc2ebf3929027,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653341080464863,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a869559af2c6de5b7dcb71ef5b628f00cb225f2afe49e3da71ccd3beeb5b7b0,PodSandboxId:4db3c3f431502ba8a5d68f13b529f6e72e809cd684591e80d0f3d90afbd8b79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653340964873531,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f21fbbd8883e745450e735168ec000,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9354d01c92c82f9751f0c5001763ce1a1b2d8897a98cb74a25f2686ec0357d,PodSandboxId:76d622d673f720b37ba2d548dfa02bdc922e9b7fb74ef36e6b7090d1af4a88bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653055116048356,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec665690-f32d-4701-9279-de510870b8e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.717246366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a592c9d-b861-4764-b0e4-64a3cdf41582 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.717321176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a592c9d-b861-4764-b0e4-64a3cdf41582 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.718242870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fb92850-f5ef-4004-b452-9e2089104ea5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.718723601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653903718696613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fb92850-f5ef-4004-b452-9e2089104ea5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.719219508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27c01511-d859-47b5-9c8e-51e0419f77ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.719289466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27c01511-d859-47b5-9c8e-51e0419f77ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.719663499Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a310412e4fc593a868b560923edaf2a2d97a8781f3bf198ddef6fcbabc30ea,PodSandboxId:ed8b0ac0ccfab7815363049809c3b4d30150855a7effa6522b8e64e7a0abb248,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653353617080301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51644de2-a533-44ec-8e7e-4842e80a896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dca79959ab052931eb8dfe83b403032d1dc8cb5cd45d5c9558c1acef26a20a8,PodSandboxId:88db6addd475cc2829a38b167389c9a5fd92e007133f926fb1f77511e57bd0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653353045978239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-br45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9992e3-3e5f-437d-90e0-b1087dca42e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5c25ddf467f94545e934f86a69678236422b4aabc5bb7c79a7d2c178cc6204,PodSandboxId:980c51c1efd8837a63f4de3db4b86192367e7d8fd78b00db87351c867c895fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653352927545519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gtlxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
806a981-e9dc-46ec-b440-94ea611c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f743811765445b814dcf080d2da3c45480620c42cd79fa8c2de33f996dd26c70,PodSandboxId:d5f43957a49278340fcb415e458015e5299fc1a163728abd2c58b2033c4c7b0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1725653352090981196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzx2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e52ab6-7d95-4a7a-acfa-66bbc748d1db,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0967ba02d355613d37db995ed77ff29c0e033806e963c18202dedeb7a6dc4c83,PodSandboxId:a0209b1658f52d092727669d36fcfc48b6e96a1b37d322083df75347c56f63f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653341088003524,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7b22e239d297d4a55de7cc9009cb12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30c0a5d7a13b5a7143ad119b5b65b7d84f9933225688694c3927007ce8208e,PodSandboxId:7adbf46638a3d35a4b64d98021e9e72559a95abe03a7c3b94b28e44ebcb862a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653341112222954,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b658c82eb54e0d4714bca5ecca195e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c4dcf1da46f860ab0d70d0478786996b10b91b427863964edcc8c26ce450672,PodSandboxId:d4878d4eed5725cfba45f4570d494a40f897e1f9f97006eabb3bc2ebf3929027,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653341080464863,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a869559af2c6de5b7dcb71ef5b628f00cb225f2afe49e3da71ccd3beeb5b7b0,PodSandboxId:4db3c3f431502ba8a5d68f13b529f6e72e809cd684591e80d0f3d90afbd8b79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653340964873531,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f21fbbd8883e745450e735168ec000,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9354d01c92c82f9751f0c5001763ce1a1b2d8897a98cb74a25f2686ec0357d,PodSandboxId:76d622d673f720b37ba2d548dfa02bdc922e9b7fb74ef36e6b7090d1af4a88bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653055116048356,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27c01511-d859-47b5-9c8e-51e0419f77ab name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.758524625Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5682eb58-aa7f-45f5-a43d-de01bdaca128 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.758597393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5682eb58-aa7f-45f5-a43d-de01bdaca128 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.759517078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c76e850-f6ab-4b20-8fb6-8f5249f2c9db name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.760086672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653903760061942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c76e850-f6ab-4b20-8fb6-8f5249f2c9db name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.760547886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87b80f88-ea85-41f2-83ca-5644e30e82ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.760617690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87b80f88-ea85-41f2-83ca-5644e30e82ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.760876035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a310412e4fc593a868b560923edaf2a2d97a8781f3bf198ddef6fcbabc30ea,PodSandboxId:ed8b0ac0ccfab7815363049809c3b4d30150855a7effa6522b8e64e7a0abb248,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653353617080301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51644de2-a533-44ec-8e7e-4842e80a896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dca79959ab052931eb8dfe83b403032d1dc8cb5cd45d5c9558c1acef26a20a8,PodSandboxId:88db6addd475cc2829a38b167389c9a5fd92e007133f926fb1f77511e57bd0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653353045978239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-br45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9992e3-3e5f-437d-90e0-b1087dca42e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5c25ddf467f94545e934f86a69678236422b4aabc5bb7c79a7d2c178cc6204,PodSandboxId:980c51c1efd8837a63f4de3db4b86192367e7d8fd78b00db87351c867c895fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653352927545519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gtlxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
806a981-e9dc-46ec-b440-94ea611c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f743811765445b814dcf080d2da3c45480620c42cd79fa8c2de33f996dd26c70,PodSandboxId:d5f43957a49278340fcb415e458015e5299fc1a163728abd2c58b2033c4c7b0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1725653352090981196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzx2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e52ab6-7d95-4a7a-acfa-66bbc748d1db,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0967ba02d355613d37db995ed77ff29c0e033806e963c18202dedeb7a6dc4c83,PodSandboxId:a0209b1658f52d092727669d36fcfc48b6e96a1b37d322083df75347c56f63f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653341088003524,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7b22e239d297d4a55de7cc9009cb12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30c0a5d7a13b5a7143ad119b5b65b7d84f9933225688694c3927007ce8208e,PodSandboxId:7adbf46638a3d35a4b64d98021e9e72559a95abe03a7c3b94b28e44ebcb862a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653341112222954,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b658c82eb54e0d4714bca5ecca195e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c4dcf1da46f860ab0d70d0478786996b10b91b427863964edcc8c26ce450672,PodSandboxId:d4878d4eed5725cfba45f4570d494a40f897e1f9f97006eabb3bc2ebf3929027,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653341080464863,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a869559af2c6de5b7dcb71ef5b628f00cb225f2afe49e3da71ccd3beeb5b7b0,PodSandboxId:4db3c3f431502ba8a5d68f13b529f6e72e809cd684591e80d0f3d90afbd8b79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653340964873531,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f21fbbd8883e745450e735168ec000,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9354d01c92c82f9751f0c5001763ce1a1b2d8897a98cb74a25f2686ec0357d,PodSandboxId:76d622d673f720b37ba2d548dfa02bdc922e9b7fb74ef36e6b7090d1af4a88bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653055116048356,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87b80f88-ea85-41f2-83ca-5644e30e82ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.794251459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29049b03-335e-4397-a49d-0249183f790d name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.794359262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29049b03-335e-4397-a49d-0249183f790d name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.795671521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f7bc4d4-2352-4b82-b5af-e668633c6185 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.796157023Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653903796114299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f7bc4d4-2352-4b82-b5af-e668633c6185 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.796720628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9871fd76-3424-488c-96ae-a60a21cb27bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.796821792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9871fd76-3424-488c-96ae-a60a21cb27bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:23 embed-certs-458066 crio[708]: time="2024-09-06 20:18:23.797009220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a310412e4fc593a868b560923edaf2a2d97a8781f3bf198ddef6fcbabc30ea,PodSandboxId:ed8b0ac0ccfab7815363049809c3b4d30150855a7effa6522b8e64e7a0abb248,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653353617080301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51644de2-a533-44ec-8e7e-4842e80a896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dca79959ab052931eb8dfe83b403032d1dc8cb5cd45d5c9558c1acef26a20a8,PodSandboxId:88db6addd475cc2829a38b167389c9a5fd92e007133f926fb1f77511e57bd0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653353045978239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-br45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9992e3-3e5f-437d-90e0-b1087dca42e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5c25ddf467f94545e934f86a69678236422b4aabc5bb7c79a7d2c178cc6204,PodSandboxId:980c51c1efd8837a63f4de3db4b86192367e7d8fd78b00db87351c867c895fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653352927545519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gtlxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
806a981-e9dc-46ec-b440-94ea611c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f743811765445b814dcf080d2da3c45480620c42cd79fa8c2de33f996dd26c70,PodSandboxId:d5f43957a49278340fcb415e458015e5299fc1a163728abd2c58b2033c4c7b0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1725653352090981196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzx2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e52ab6-7d95-4a7a-acfa-66bbc748d1db,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0967ba02d355613d37db995ed77ff29c0e033806e963c18202dedeb7a6dc4c83,PodSandboxId:a0209b1658f52d092727669d36fcfc48b6e96a1b37d322083df75347c56f63f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653341088003524,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7b22e239d297d4a55de7cc9009cb12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30c0a5d7a13b5a7143ad119b5b65b7d84f9933225688694c3927007ce8208e,PodSandboxId:7adbf46638a3d35a4b64d98021e9e72559a95abe03a7c3b94b28e44ebcb862a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653341112222954,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b658c82eb54e0d4714bca5ecca195e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c4dcf1da46f860ab0d70d0478786996b10b91b427863964edcc8c26ce450672,PodSandboxId:d4878d4eed5725cfba45f4570d494a40f897e1f9f97006eabb3bc2ebf3929027,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653341080464863,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a869559af2c6de5b7dcb71ef5b628f00cb225f2afe49e3da71ccd3beeb5b7b0,PodSandboxId:4db3c3f431502ba8a5d68f13b529f6e72e809cd684591e80d0f3d90afbd8b79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653340964873531,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f21fbbd8883e745450e735168ec000,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9354d01c92c82f9751f0c5001763ce1a1b2d8897a98cb74a25f2686ec0357d,PodSandboxId:76d622d673f720b37ba2d548dfa02bdc922e9b7fb74ef36e6b7090d1af4a88bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653055116048356,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9871fd76-3424-488c-96ae-a60a21cb27bb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	20a310412e4fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ed8b0ac0ccfab       storage-provisioner
	5dca79959ab05       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   88db6addd475c       coredns-6f6b679f8f-br45p
	fd5c25ddf467f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   980c51c1efd88       coredns-6f6b679f8f-gtlxq
	f743811765445       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   d5f43957a4927       kube-proxy-rzx2f
	5f30c0a5d7a13       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   7adbf46638a3d       etcd-embed-certs-458066
	0967ba02d3556       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   a0209b1658f52       kube-scheduler-embed-certs-458066
	3c4dcf1da46f8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   d4878d4eed572       kube-apiserver-embed-certs-458066
	0a869559af2c6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   4db3c3f431502       kube-controller-manager-embed-certs-458066
	6b9354d01c92c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   76d622d673f72       kube-apiserver-embed-certs-458066
	
	
	==> coredns [5dca79959ab052931eb8dfe83b403032d1dc8cb5cd45d5c9558c1acef26a20a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fd5c25ddf467f94545e934f86a69678236422b4aabc5bb7c79a7d2c178cc6204] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-458066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-458066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=embed-certs-458066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T20_09_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 20:09:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-458066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 20:18:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 20:14:22 +0000   Fri, 06 Sep 2024 20:09:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 20:14:22 +0000   Fri, 06 Sep 2024 20:09:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 20:14:22 +0000   Fri, 06 Sep 2024 20:09:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 20:14:22 +0000   Fri, 06 Sep 2024 20:09:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    embed-certs-458066
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c773c140511b4e9ca1fd1ead399a4e72
	  System UUID:                c773c140-511b-4e9c-a1fd-1ead399a4e72
	  Boot ID:                    2dadd490-81d8-412f-9cc4-b0b6e2179136
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-br45p                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-6f6b679f8f-gtlxq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-embed-certs-458066                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-458066             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-458066    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-rzx2f                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-embed-certs-458066             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-74kzz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m11s  kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node embed-certs-458066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node embed-certs-458066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node embed-certs-458066 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s  node-controller  Node embed-certs-458066 event: Registered Node embed-certs-458066 in Controller
	
	
	==> dmesg <==
	[  +0.050295] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040174] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.780626] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.467512] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.620471] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 20:04] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.057434] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056663] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.194576] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.120122] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.293621] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[  +4.235437] systemd-fstab-generator[787]: Ignoring "noauto" option for root device
	[  +1.956944] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.060731] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.544535] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.758269] kauditd_printk_skb: 87 callbacks suppressed
	[Sep 6 20:08] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.314390] systemd-fstab-generator[2549]: Ignoring "noauto" option for root device
	[Sep 6 20:09] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.660149] systemd-fstab-generator[2870]: Ignoring "noauto" option for root device
	[  +5.390436] systemd-fstab-generator[2983]: Ignoring "noauto" option for root device
	[  +0.124985] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.145270] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5f30c0a5d7a13b5a7143ad119b5b65b7d84f9933225688694c3927007ce8208e] <==
	{"level":"info","ts":"2024-09-06T20:09:01.481636Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-06T20:09:01.481907Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-09-06T20:09:01.481951Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-09-06T20:09:01.483031Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"86c29206b457f123","initial-advertise-peer-urls":["https://192.168.39.118:2380"],"listen-peer-urls":["https://192.168.39.118:2380"],"advertise-client-urls":["https://192.168.39.118:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.118:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T20:09:01.483749Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T20:09:01.904850Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-06T20:09:01.905000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-06T20:09:01.905058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgPreVoteResp from 86c29206b457f123 at term 1"}
	{"level":"info","ts":"2024-09-06T20:09:01.905094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became candidate at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:01.905118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgVoteResp from 86c29206b457f123 at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:01.905151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became leader at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:01.905176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86c29206b457f123 elected leader 86c29206b457f123 at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:01.908086Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"86c29206b457f123","local-member-attributes":"{Name:embed-certs-458066 ClientURLs:[https://192.168.39.118:2379]}","request-path":"/0/members/86c29206b457f123/attributes","cluster-id":"56e4fbef5627b38f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T20:09:01.909829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:09:01.909856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:09:01.923157Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:09:01.909951Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:01.925691Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T20:09:01.926901Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:09:01.927157Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T20:09:01.927225Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T20:09:01.928282Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.118:2379"}
	{"level":"info","ts":"2024-09-06T20:09:01.930272Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"56e4fbef5627b38f","local-member-id":"86c29206b457f123","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:01.930456Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:01.930507Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:18:24 up 14 min,  0 users,  load average: 0.09, 0.14, 0.10
	Linux embed-certs-458066 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3c4dcf1da46f860ab0d70d0478786996b10b91b427863964edcc8c26ce450672] <==
	W0906 20:14:04.722433       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:14:04.722525       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:14:04.723696       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:14:04.723800       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:15:04.724457       1 handler_proxy.go:99] no RequestInfo found in the context
	W0906 20:15:04.724482       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:15:04.724509       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0906 20:15:04.724558       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:15:04.725671       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:15:04.725816       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:17:04.726490       1 handler_proxy.go:99] no RequestInfo found in the context
	W0906 20:17:04.726940       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:17:04.727024       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0906 20:17:04.727059       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:17:04.728229       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:17:04.728319       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [6b9354d01c92c82f9751f0c5001763ce1a1b2d8897a98cb74a25f2686ec0357d] <==
	W0906 20:08:54.776538       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.804214       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.861208       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.868896       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.920708       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.924275       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.943381       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.969254       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.970864       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.029157       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.054970       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.087108       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.093599       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.135923       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.137296       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.179697       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.192291       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.403973       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.505681       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.542929       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.551391       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.614851       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.670380       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.744727       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.848199       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0a869559af2c6de5b7dcb71ef5b628f00cb225f2afe49e3da71ccd3beeb5b7b0] <==
	E0906 20:13:10.619749       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:13:11.149888       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:13:40.626143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:13:41.157657       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:14:10.632354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:14:11.165156       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:14:22.748986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-458066"
	E0906 20:14:40.638846       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:14:41.173933       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:15:10.646276       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:15:11.182479       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:15:19.462099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="262.646µs"
	I0906 20:15:33.463612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="99.784µs"
	E0906 20:15:40.652497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:15:41.191418       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:16:10.659543       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:16:11.201015       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:16:40.665395       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:16:41.210075       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:17:10.672611       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:17:11.218545       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:17:40.681520       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:17:41.228229       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:18:10.687844       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:18:11.237343       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f743811765445b814dcf080d2da3c45480620c42cd79fa8c2de33f996dd26c70] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 20:09:12.604217       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 20:09:12.616720       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.118"]
	E0906 20:09:12.618914       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 20:09:12.699347       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 20:09:12.699396       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 20:09:12.699431       1 server_linux.go:169] "Using iptables Proxier"
	I0906 20:09:12.712997       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 20:09:12.713323       1 server.go:483] "Version info" version="v1.31.0"
	I0906 20:09:12.713341       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:09:12.714825       1 config.go:197] "Starting service config controller"
	I0906 20:09:12.714851       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 20:09:12.714877       1 config.go:104] "Starting endpoint slice config controller"
	I0906 20:09:12.714882       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 20:09:12.715658       1 config.go:326] "Starting node config controller"
	I0906 20:09:12.715669       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 20:09:12.820932       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 20:09:12.820977       1 shared_informer.go:320] Caches are synced for node config
	I0906 20:09:12.821007       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0967ba02d355613d37db995ed77ff29c0e033806e963c18202dedeb7a6dc4c83] <==
	W0906 20:09:04.594040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 20:09:04.594232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.595431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 20:09:04.595466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.605043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 20:09:04.605076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.613026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:04.613059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.642978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 20:09:04.643042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.685155       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 20:09:04.685211       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0906 20:09:04.728309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 20:09:04.728369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.747629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:04.747813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.852053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:04.852454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.860006       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 20:09:04.860149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.972957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 20:09:04.973159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:05.030703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:05.031086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0906 20:09:06.937005       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 20:17:14 embed-certs-458066 kubelet[2877]: E0906 20:17:14.447343    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:17:16 embed-certs-458066 kubelet[2877]: E0906 20:17:16.582431    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653836581856458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:16 embed-certs-458066 kubelet[2877]: E0906 20:17:16.582480    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653836581856458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:25 embed-certs-458066 kubelet[2877]: E0906 20:17:25.448207    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:17:26 embed-certs-458066 kubelet[2877]: E0906 20:17:26.583544    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653846583315420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:26 embed-certs-458066 kubelet[2877]: E0906 20:17:26.583568    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653846583315420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:36 embed-certs-458066 kubelet[2877]: E0906 20:17:36.585532    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653856585199157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:36 embed-certs-458066 kubelet[2877]: E0906 20:17:36.585904    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653856585199157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:38 embed-certs-458066 kubelet[2877]: E0906 20:17:38.446310    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:17:46 embed-certs-458066 kubelet[2877]: E0906 20:17:46.588151    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653866587747625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:46 embed-certs-458066 kubelet[2877]: E0906 20:17:46.588216    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653866587747625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:50 embed-certs-458066 kubelet[2877]: E0906 20:17:50.447803    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:17:56 embed-certs-458066 kubelet[2877]: E0906 20:17:56.590369    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653876589961155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:56 embed-certs-458066 kubelet[2877]: E0906 20:17:56.590704    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653876589961155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:02 embed-certs-458066 kubelet[2877]: E0906 20:18:02.448044    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:18:06 embed-certs-458066 kubelet[2877]: E0906 20:18:06.464568    2877 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 20:18:06 embed-certs-458066 kubelet[2877]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 20:18:06 embed-certs-458066 kubelet[2877]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 20:18:06 embed-certs-458066 kubelet[2877]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 20:18:06 embed-certs-458066 kubelet[2877]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 20:18:06 embed-certs-458066 kubelet[2877]: E0906 20:18:06.593005    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653886592309573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:06 embed-certs-458066 kubelet[2877]: E0906 20:18:06.593035    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653886592309573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:14 embed-certs-458066 kubelet[2877]: E0906 20:18:14.447271    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:18:16 embed-certs-458066 kubelet[2877]: E0906 20:18:16.594412    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653896594103878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:16 embed-certs-458066 kubelet[2877]: E0906 20:18:16.594722    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653896594103878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [20a310412e4fc593a868b560923edaf2a2d97a8781f3bf198ddef6fcbabc30ea] <==
	I0906 20:09:13.742019       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 20:09:13.755050       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 20:09:13.755265       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 20:09:13.769441       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 20:09:13.769721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-458066_5b9d34f8-d4c0-47e9-8998-bdb11653cc78!
	I0906 20:09:13.776591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3138066-1db7-4a57-be2d-23292dc46eb3", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-458066_5b9d34f8-d4c0-47e9-8998-bdb11653cc78 became leader
	I0906 20:09:13.871167       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-458066_5b9d34f8-d4c0-47e9-8998-bdb11653cc78!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-458066 -n embed-certs-458066
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-458066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-74kzz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-458066 describe pod metrics-server-6867b74b74-74kzz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-458066 describe pod metrics-server-6867b74b74-74kzz: exit status 1 (63.414299ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-74kzz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-458066 describe pod metrics-server-6867b74b74-74kzz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.21s)
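The UserAppExistsAfterStop failures in this group all share the same shape: after the stop/start cycle the test polls for up to 9m0s for a Running pod labelled k8s-app=kubernetes-dashboard and then gives up with "context deadline exceeded". As a rough, hedged illustration only (this is not the actual helpers_test.go / start_stop_delete_test.go code; the kubeconfig path and 5s poll interval are assumptions), a minimal client-go sketch of that kind of labelled-pod wait could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the integration jobs point KUBECONFIG at a per-run file.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll until a Running pod matching the label appears, or the 9m deadline expires.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := clientset.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // treat list errors as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("dashboard pod did not become Running within the deadline:", err)
		return
	}
	fmt.Println("dashboard pod is Running")
}

In the runs above this kind of wait never succeeds: the pod list for the label keeps coming back without a Running pod until the 9m0s context deadline is hit.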

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0906 20:09:47.257028   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:09:49.184583   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:09:58.425841   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-06 20:18:42.838309991 +0000 UTC m=+6569.328063529
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-653828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-653828 logs -n 25: (2.077455317s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-603826 sudo cat                              | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo find                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo crio                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-603826                                       | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-859361 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | disable-driver-mounts-859361                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:57 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-504385             | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-458066            | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653828  | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC | 06 Sep 24 19:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC |                     |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-504385                  | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-458066                 | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-843298        | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653828       | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-843298             | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 20:00:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:00:55.455816   73230 out.go:345] Setting OutFile to fd 1 ...
	I0906 20:00:55.455933   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.455943   73230 out.go:358] Setting ErrFile to fd 2...
	I0906 20:00:55.455951   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.456141   73230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 20:00:55.456685   73230 out.go:352] Setting JSON to false
	I0906 20:00:55.457698   73230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6204,"bootTime":1725646651,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 20:00:55.457762   73230 start.go:139] virtualization: kvm guest
	I0906 20:00:55.459863   73230 out.go:177] * [old-k8s-version-843298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 20:00:55.461119   73230 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 20:00:55.461167   73230 notify.go:220] Checking for updates...
	I0906 20:00:55.463398   73230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:00:55.464573   73230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:00:55.465566   73230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:00:55.466605   73230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 20:00:55.467834   73230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:00:55.469512   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:00:55.470129   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.470183   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.484881   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I0906 20:00:55.485238   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.485752   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.485776   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.486108   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.486296   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.488175   73230 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 20:00:55.489359   73230 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 20:00:55.489671   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.489705   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.504589   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0906 20:00:55.505047   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.505557   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.505581   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.505867   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.506018   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.541116   73230 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 20:00:55.542402   73230 start.go:297] selected driver: kvm2
	I0906 20:00:55.542423   73230 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.542548   73230 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:00:55.543192   73230 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.543257   73230 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 20:00:55.558465   73230 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 20:00:55.558833   73230 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:00:55.558865   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:00:55.558875   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:00:55.558908   73230 start.go:340] cluster config:
	{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.559011   73230 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.561521   73230 out.go:177] * Starting "old-k8s-version-843298" primary control-plane node in "old-k8s-version-843298" cluster
	I0906 20:00:55.309027   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:58.377096   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:55.562714   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:00:55.562760   73230 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 20:00:55.562773   73230 cache.go:56] Caching tarball of preloaded images
	I0906 20:00:55.562856   73230 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 20:00:55.562868   73230 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0906 20:00:55.562977   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:00:55.563173   73230 start.go:360] acquireMachinesLock for old-k8s-version-843298: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:01:04.457122   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:07.529093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:13.609120   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:16.681107   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:22.761164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:25.833123   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:31.913167   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:34.985108   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:41.065140   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:44.137176   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:50.217162   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:53.289137   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:59.369093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:02.441171   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:08.521164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:11.593164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:17.673124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:20.745159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:26.825154   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:29.897211   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:35.977181   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:39.049161   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:45.129172   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:48.201208   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:54.281103   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:57.353175   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:03.433105   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:06.505124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:12.585121   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:15.657169   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:21.737151   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:24.809135   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:30.889180   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:33.961145   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:40.041159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:43.113084   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:46.117237   72441 start.go:364] duration metric: took 4m28.485189545s to acquireMachinesLock for "embed-certs-458066"
	I0906 20:03:46.117298   72441 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:03:46.117309   72441 fix.go:54] fixHost starting: 
	I0906 20:03:46.117737   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:03:46.117773   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:03:46.132573   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0906 20:03:46.133029   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:03:46.133712   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:03:46.133743   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:03:46.134097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:03:46.134322   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:03:46.134505   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:03:46.136291   72441 fix.go:112] recreateIfNeeded on embed-certs-458066: state=Stopped err=<nil>
	I0906 20:03:46.136313   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	W0906 20:03:46.136466   72441 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:03:46.138544   72441 out.go:177] * Restarting existing kvm2 VM for "embed-certs-458066" ...
	I0906 20:03:46.139833   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Start
	I0906 20:03:46.140001   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring networks are active...
	I0906 20:03:46.140754   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network default is active
	I0906 20:03:46.141087   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network mk-embed-certs-458066 is active
	I0906 20:03:46.141402   72441 main.go:141] libmachine: (embed-certs-458066) Getting domain xml...
	I0906 20:03:46.142202   72441 main.go:141] libmachine: (embed-certs-458066) Creating domain...
	I0906 20:03:47.351460   72441 main.go:141] libmachine: (embed-certs-458066) Waiting to get IP...
	I0906 20:03:47.352248   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.352628   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.352699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.352597   73827 retry.go:31] will retry after 202.870091ms: waiting for machine to come up
	I0906 20:03:46.114675   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:03:46.114711   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115092   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:03:46.115118   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115306   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:03:46.117092   72322 machine.go:96] duration metric: took 4m37.429712277s to provisionDockerMachine
	I0906 20:03:46.117135   72322 fix.go:56] duration metric: took 4m37.451419912s for fixHost
	I0906 20:03:46.117144   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 4m37.45145595s
	W0906 20:03:46.117167   72322 start.go:714] error starting host: provision: host is not running
	W0906 20:03:46.117242   72322 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0906 20:03:46.117252   72322 start.go:729] Will try again in 5 seconds ...
	I0906 20:03:47.557228   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.557656   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.557682   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.557606   73827 retry.go:31] will retry after 357.664781ms: waiting for machine to come up
	I0906 20:03:47.917575   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.918041   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.918068   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.918005   73827 retry.go:31] will retry after 338.480268ms: waiting for machine to come up
	I0906 20:03:48.258631   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.259269   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.259305   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.259229   73827 retry.go:31] will retry after 554.173344ms: waiting for machine to come up
	I0906 20:03:48.814947   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.815491   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.815523   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.815449   73827 retry.go:31] will retry after 601.029419ms: waiting for machine to come up
	I0906 20:03:49.418253   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:49.418596   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:49.418623   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:49.418548   73827 retry.go:31] will retry after 656.451458ms: waiting for machine to come up
	I0906 20:03:50.076488   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:50.076908   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:50.076928   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:50.076875   73827 retry.go:31] will retry after 1.13800205s: waiting for machine to come up
	I0906 20:03:51.216380   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:51.216801   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:51.216831   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:51.216758   73827 retry.go:31] will retry after 1.071685673s: waiting for machine to come up
	I0906 20:03:52.289760   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:52.290174   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:52.290202   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:52.290125   73827 retry.go:31] will retry after 1.581761127s: waiting for machine to come up
	I0906 20:03:51.119269   72322 start.go:360] acquireMachinesLock for no-preload-504385: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:03:53.873755   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:53.874150   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:53.874184   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:53.874120   73827 retry.go:31] will retry after 1.99280278s: waiting for machine to come up
	I0906 20:03:55.869267   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:55.869747   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:55.869776   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:55.869685   73827 retry.go:31] will retry after 2.721589526s: waiting for machine to come up
	I0906 20:03:58.594012   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:58.594402   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:58.594428   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:58.594354   73827 retry.go:31] will retry after 2.763858077s: waiting for machine to come up
	I0906 20:04:01.359424   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:01.359775   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:04:01.359809   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:04:01.359736   73827 retry.go:31] will retry after 3.822567166s: waiting for machine to come up
	I0906 20:04:06.669858   72867 start.go:364] duration metric: took 4m9.363403512s to acquireMachinesLock for "default-k8s-diff-port-653828"
	I0906 20:04:06.669929   72867 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:06.669938   72867 fix.go:54] fixHost starting: 
	I0906 20:04:06.670353   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:06.670393   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:06.688290   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44215
	I0906 20:04:06.688752   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:06.689291   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:04:06.689314   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:06.689692   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:06.689886   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:06.690048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:04:06.691557   72867 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653828: state=Stopped err=<nil>
	I0906 20:04:06.691592   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	W0906 20:04:06.691742   72867 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:06.693924   72867 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653828" ...
	I0906 20:04:06.694965   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Start
	I0906 20:04:06.695148   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring networks are active...
	I0906 20:04:06.695900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network default is active
	I0906 20:04:06.696316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network mk-default-k8s-diff-port-653828 is active
	I0906 20:04:06.696698   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Getting domain xml...
	I0906 20:04:06.697469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Creating domain...
	I0906 20:04:05.186782   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187288   72441 main.go:141] libmachine: (embed-certs-458066) Found IP for machine: 192.168.39.118
	I0906 20:04:05.187301   72441 main.go:141] libmachine: (embed-certs-458066) Reserving static IP address...
	I0906 20:04:05.187340   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has current primary IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187764   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.187784   72441 main.go:141] libmachine: (embed-certs-458066) Reserved static IP address: 192.168.39.118
	I0906 20:04:05.187797   72441 main.go:141] libmachine: (embed-certs-458066) DBG | skip adding static IP to network mk-embed-certs-458066 - found existing host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"}
	I0906 20:04:05.187805   72441 main.go:141] libmachine: (embed-certs-458066) Waiting for SSH to be available...
	I0906 20:04:05.187848   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Getting to WaitForSSH function...
	I0906 20:04:05.190229   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190546   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.190576   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190643   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH client type: external
	I0906 20:04:05.190679   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa (-rw-------)
	I0906 20:04:05.190714   72441 main.go:141] libmachine: (embed-certs-458066) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:05.190727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | About to run SSH command:
	I0906 20:04:05.190761   72441 main.go:141] libmachine: (embed-certs-458066) DBG | exit 0
	I0906 20:04:05.317160   72441 main.go:141] libmachine: (embed-certs-458066) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:05.317483   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetConfigRaw
	I0906 20:04:05.318089   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.320559   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.320944   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.320971   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.321225   72441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/config.json ...
	I0906 20:04:05.321445   72441 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:05.321465   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:05.321720   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.323699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.323972   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.324009   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.324126   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.324303   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324444   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324561   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.324706   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.324940   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.324953   72441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:05.437192   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:05.437217   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437479   72441 buildroot.go:166] provisioning hostname "embed-certs-458066"
	I0906 20:04:05.437495   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437665   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.440334   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440705   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.440733   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440925   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.441100   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441260   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441405   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.441573   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.441733   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.441753   72441 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-458066 && echo "embed-certs-458066" | sudo tee /etc/hostname
	I0906 20:04:05.566958   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-458066
	
	I0906 20:04:05.566986   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.569652   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.569984   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.570014   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.570158   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.570342   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570504   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570648   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.570838   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.571042   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.571060   72441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-458066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-458066/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-458066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:05.689822   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:05.689855   72441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:05.689882   72441 buildroot.go:174] setting up certificates
	I0906 20:04:05.689891   72441 provision.go:84] configureAuth start
	I0906 20:04:05.689899   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.690182   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.692758   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693151   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.693172   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693308   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.695364   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.695754   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695909   72441 provision.go:143] copyHostCerts
	I0906 20:04:05.695957   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:05.695975   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:05.696042   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:05.696123   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:05.696130   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:05.696153   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:05.696248   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:05.696257   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:05.696280   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:05.696329   72441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-458066 san=[127.0.0.1 192.168.39.118 embed-certs-458066 localhost minikube]
	I0906 20:04:06.015593   72441 provision.go:177] copyRemoteCerts
	I0906 20:04:06.015656   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:06.015683   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.018244   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018598   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.018630   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018784   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.018990   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.019169   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.019278   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.110170   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 20:04:06.136341   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:06.161181   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:06.184758   72441 provision.go:87] duration metric: took 494.857261ms to configureAuth
	I0906 20:04:06.184786   72441 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:06.184986   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:06.185049   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.187564   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.187955   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.187978   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.188153   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.188399   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188571   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.188920   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.189070   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.189084   72441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:06.425480   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:06.425518   72441 machine.go:96] duration metric: took 1.104058415s to provisionDockerMachine
	I0906 20:04:06.425535   72441 start.go:293] postStartSetup for "embed-certs-458066" (driver="kvm2")
	I0906 20:04:06.425548   72441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:06.425572   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.425893   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:06.425919   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.428471   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428768   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.428794   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428928   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.429109   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.429283   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.429419   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.515180   72441 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:06.519357   72441 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:06.519390   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:06.519464   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:06.519540   72441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:06.519625   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:06.528542   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:06.552463   72441 start.go:296] duration metric: took 126.912829ms for postStartSetup
	I0906 20:04:06.552514   72441 fix.go:56] duration metric: took 20.435203853s for fixHost
	I0906 20:04:06.552540   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.554994   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555521   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.555556   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555739   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.555937   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556095   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556253   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.556409   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.556600   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.556613   72441 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:06.669696   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653046.632932221
	
	I0906 20:04:06.669720   72441 fix.go:216] guest clock: 1725653046.632932221
	I0906 20:04:06.669730   72441 fix.go:229] Guest: 2024-09-06 20:04:06.632932221 +0000 UTC Remote: 2024-09-06 20:04:06.552518521 +0000 UTC m=+289.061134864 (delta=80.4137ms)
	I0906 20:04:06.669761   72441 fix.go:200] guest clock delta is within tolerance: 80.4137ms
	I0906 20:04:06.669769   72441 start.go:83] releasing machines lock for "embed-certs-458066", held for 20.552490687s
	I0906 20:04:06.669801   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.670060   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:06.673015   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673405   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.673433   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673599   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674041   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674210   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674304   72441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:06.674351   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.674414   72441 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:06.674437   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.676916   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677063   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677314   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677341   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677481   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677513   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677686   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677691   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677864   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677878   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678013   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678025   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.678191   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.758176   72441 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:06.782266   72441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:06.935469   72441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:06.941620   72441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:06.941680   72441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:06.957898   72441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:06.957927   72441 start.go:495] detecting cgroup driver to use...
	I0906 20:04:06.957995   72441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:06.978574   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:06.993967   72441 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:06.994035   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:07.008012   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:07.022073   72441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:07.133622   72441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:07.291402   72441 docker.go:233] disabling docker service ...
	I0906 20:04:07.291478   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:07.306422   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:07.321408   72441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:07.442256   72441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:07.564181   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:07.579777   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:07.599294   72441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:07.599361   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.610457   72441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:07.610555   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.621968   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.633527   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.645048   72441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:07.659044   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.670526   72441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.689465   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.701603   72441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:07.712085   72441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:07.712144   72441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:07.728406   72441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:07.739888   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:07.862385   72441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:07.954721   72441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:07.954792   72441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:07.959478   72441 start.go:563] Will wait 60s for crictl version
	I0906 20:04:07.959545   72441 ssh_runner.go:195] Run: which crictl
	I0906 20:04:07.963893   72441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:08.003841   72441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:08.003917   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.032191   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.063563   72441 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:04:07.961590   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting to get IP...
	I0906 20:04:07.962441   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962859   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962923   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:07.962841   73982 retry.go:31] will retry after 292.508672ms: waiting for machine to come up
	I0906 20:04:08.257346   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257845   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257867   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.257815   73982 retry.go:31] will retry after 265.967606ms: waiting for machine to come up
	I0906 20:04:08.525352   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525907   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.525834   73982 retry.go:31] will retry after 308.991542ms: waiting for machine to come up
	I0906 20:04:08.836444   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837021   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837053   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.836973   73982 retry.go:31] will retry after 483.982276ms: waiting for machine to come up
	I0906 20:04:09.322661   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323161   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323184   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.323125   73982 retry.go:31] will retry after 574.860867ms: waiting for machine to come up
	I0906 20:04:09.899849   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900228   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.900187   73982 retry.go:31] will retry after 769.142372ms: waiting for machine to come up
	I0906 20:04:10.671316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671796   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671853   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:10.671771   73982 retry.go:31] will retry after 720.232224ms: waiting for machine to come up
	I0906 20:04:11.393120   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393502   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393534   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:11.393447   73982 retry.go:31] will retry after 975.812471ms: waiting for machine to come up
	I0906 20:04:08.064907   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:08.067962   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068410   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:08.068442   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068626   72441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:08.072891   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:08.086275   72441 kubeadm.go:883] updating cluster {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:08.086383   72441 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:08.086423   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:08.123100   72441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:08.123158   72441 ssh_runner.go:195] Run: which lz4
	I0906 20:04:08.127330   72441 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:08.131431   72441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:08.131466   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:09.584066   72441 crio.go:462] duration metric: took 1.456765631s to copy over tarball
	I0906 20:04:09.584131   72441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:11.751911   72441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.167751997s)
	I0906 20:04:11.751949   72441 crio.go:469] duration metric: took 2.167848466s to extract the tarball
	I0906 20:04:11.751959   72441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:11.790385   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:11.831973   72441 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:11.831995   72441 cache_images.go:84] Images are preloaded, skipping loading
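(Annotation: the preload decision above is made by listing images through `crictl images --output json` and looking for an expected reference such as `registry.k8s.io/kube-apiserver:v1.31.0` — not found at 20:04:08, found after the tarball is extracted at 20:04:11. A sketch of that check, assuming the usual `images`/`repoTags` shape of crictl's JSON output; the struct here is trimmed for illustration.)

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageList models only the fields of `crictl images --output json`
// that the preload check needs (assumed shape, trimmed for the sketch).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether want appears among the repo tags in raw JSON.
func hasImage(raw []byte, want string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.31.0")
	fmt.Println(ok, err)
}
```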
	I0906 20:04:11.832003   72441 kubeadm.go:934] updating node { 192.168.39.118 8443 v1.31.0 crio true true} ...
	I0906 20:04:11.832107   72441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-458066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
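(Annotation: the `[Service]` drop-in above is rendered from the node's IP, hostname and Kubernetes version and copied to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` — the 318-byte scp a few lines further down. A minimal templating sketch of how such a drop-in could be rendered; the template fields are illustrative, not minikube's own names.)

```go
package main

import (
	"os"
	"text/template"
)

// dropIn is the skeleton of the kubelet systemd override shown in the log above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above.
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.0",
		"NodeName":          "embed-certs-458066",
		"NodeIP":            "192.168.39.118",
	})
}
```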
	I0906 20:04:11.832166   72441 ssh_runner.go:195] Run: crio config
	I0906 20:04:11.881946   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:11.881973   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:11.882000   72441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:11.882028   72441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-458066 NodeName:embed-certs-458066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:11.882186   72441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-458066"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
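(Annotation: the generated file bundles four YAML documents — InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration — into the single 2162-byte `kubeadm.yaml` copied below. A small sketch that walks such a multi-document file and reports each document's kind; it assumes the `gopkg.in/yaml.v3` dependency and uses a trimmed stand-in for the real config.)

```go
package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Trimmed stand-in for the multi-document config shown above.
	config := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(config))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
```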
	I0906 20:04:11.882266   72441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:11.892537   72441 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:11.892617   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:11.902278   72441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0906 20:04:11.920451   72441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:11.938153   72441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0906 20:04:11.957510   72441 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:11.961364   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:11.973944   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:12.109677   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:12.126348   72441 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066 for IP: 192.168.39.118
	I0906 20:04:12.126378   72441 certs.go:194] generating shared ca certs ...
	I0906 20:04:12.126399   72441 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:12.126562   72441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:12.126628   72441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:12.126642   72441 certs.go:256] generating profile certs ...
	I0906 20:04:12.126751   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/client.key
	I0906 20:04:12.126843   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key.c10a03b1
	I0906 20:04:12.126904   72441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key
	I0906 20:04:12.127063   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:12.127111   72441 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:12.127123   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:12.127153   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:12.127189   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:12.127218   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:12.127268   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:12.128117   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:12.185978   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:12.218124   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:12.254546   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:12.290098   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0906 20:04:12.317923   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:12.341186   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:12.363961   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:04:12.388000   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:12.418618   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:12.442213   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:12.465894   72441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:12.482404   72441 ssh_runner.go:195] Run: openssl version
	I0906 20:04:12.488370   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:12.499952   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504565   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504619   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.510625   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:12.522202   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:12.370306   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370743   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370779   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:12.370688   73982 retry.go:31] will retry after 1.559820467s: waiting for machine to come up
	I0906 20:04:13.932455   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933042   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933072   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:13.932985   73982 retry.go:31] will retry after 1.968766852s: waiting for machine to come up
	I0906 20:04:15.903304   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903826   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903855   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:15.903775   73982 retry.go:31] will retry after 2.738478611s: waiting for machine to come up
	I0906 20:04:12.533501   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538229   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538284   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.544065   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:12.555220   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:12.566402   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571038   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571093   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.577057   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
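(Annotation: each CA certificate copied to `/usr/share/ca-certificates` is linked into `/etc/ssl/certs` under its OpenSSL subject hash — `b5213941.0`, `51391683.0`, `3ec20f2e.0` above — which is how OpenSSL-based clients discover trusted CAs. A local sketch of the same two steps, shelling out to `openssl x509 -hash` the way the remote commands above do; it needs root and an installed openssl to actually run.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, mirroring the ln -fs commands above.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
```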
	I0906 20:04:12.588056   72441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:12.592538   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:12.598591   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:12.604398   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:12.610502   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:12.616513   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:12.622859   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
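(Annotation: the batch of `-checkend 86400` runs above asks OpenSSL whether each control-plane certificate remains valid for at least another 24 hours; a non-zero exit would mark it for regeneration. The same check expressed in plain Go with `crypto/x509` instead of shelling out — a sketch over a PEM file path, using one of the paths from the log.)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```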
	I0906 20:04:12.628975   72441 kubeadm.go:392] StartCluster: {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:12.629103   72441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:12.629154   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.667699   72441 cri.go:89] found id: ""
	I0906 20:04:12.667764   72441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:12.678070   72441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:12.678092   72441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:12.678148   72441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:12.687906   72441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:12.688889   72441 kubeconfig.go:125] found "embed-certs-458066" server: "https://192.168.39.118:8443"
	I0906 20:04:12.690658   72441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:12.700591   72441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.118
	I0906 20:04:12.700623   72441 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:12.700635   72441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:12.700675   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.741471   72441 cri.go:89] found id: ""
	I0906 20:04:12.741553   72441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:12.757877   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:12.767729   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:12.767748   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:12.767800   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:12.777094   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:12.777157   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:12.786356   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:12.795414   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:12.795470   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:12.804727   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.813481   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:12.813534   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.822844   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:12.831877   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:12.831930   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:12.841082   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:12.850560   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:12.975888   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:13.850754   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.064392   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.140680   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
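(Annotation: on restart the control plane is rebuilt phase by phase — `certs`, `kubeconfig`, `kubelet-start`, `control-plane`, `etcd` — against the freshly written `/var/tmp/minikube/kubeadm.yaml`, each phase invoked with the minikube-shipped binaries first on PATH. A condensed local sketch of that loop; it reuses the command shape from the log but runs it through `os/exec` rather than the `ssh_runner` minikube uses.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		// Same shape as the logged command, run locally instead of over SSH.
		cmd := exec.Command("/bin/bash", "-c",
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase `+
				phase+` --config /var/tmp/minikube/kubeadm.yaml`)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("phase %q failed: %v\n", phase, err)
			return
		}
	}
	fmt.Println("all init phases completed")
}
```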
	I0906 20:04:14.239317   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:14.239411   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:14.740313   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.240388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.740388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.755429   72441 api_server.go:72] duration metric: took 1.516111342s to wait for apiserver process to appear ...
	I0906 20:04:15.755462   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:15.755483   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.544772   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.544807   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.544824   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.596487   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.596546   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.755752   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.761917   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:18.761946   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.256512   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.265937   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.265973   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.756568   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.763581   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.763606   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:20.256237   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:20.262036   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:04:20.268339   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:20.268364   72441 api_server.go:131] duration metric: took 4.512894792s to wait for apiserver health ...
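(Annotation: the apiserver above first rejects anonymous `/healthz` requests with 403, then serves 500 while the `rbac/bootstrap-roles` and `scheduling/bootstrap-system-priority-classes` post-start hooks finish, and finally returns 200 roughly 4.5s in. A minimal poller with the same shape, skipping TLS verification the way a bootstrap probe against a still-self-signed apiserver has to — a sketch, not minikube's `api_server.go`.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert can't be verified yet, so the
		// bootstrap probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, firstLine(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

// firstLine trims the response body to its first line for logging.
func firstLine(b []byte) string {
	for i, c := range b {
		if c == '\n' {
			return string(b[:i])
		}
	}
	return string(b)
}

func main() {
	if err := waitForHealthz("https://192.168.39.118:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("apiserver is healthy")
}
```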
	I0906 20:04:20.268372   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:20.268378   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:20.270262   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:18.644597   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645056   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645088   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:18.644992   73982 retry.go:31] will retry after 2.982517528s: waiting for machine to come up
	I0906 20:04:21.631028   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631392   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631414   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:21.631367   73982 retry.go:31] will retry after 3.639469531s: waiting for machine to come up
	I0906 20:04:20.271474   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:20.282996   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:04:20.303957   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:20.315560   72441 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:20.315602   72441 system_pods.go:61] "coredns-6f6b679f8f-v6z7z" [b2c18dba-1210-4e95-a705-95abceca92f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:20.315611   72441 system_pods.go:61] "etcd-embed-certs-458066" [cf60e7c7-1801-42c7-be25-85242c22a5d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:20.315619   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [48c684ec-f93f-49ec-868b-6e7bc20ad506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:20.315625   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [1d55b520-2d8f-4517-a491-8193eaff5d89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:20.315631   72441 system_pods.go:61] "kube-proxy-crvq7" [f0610684-81ee-426a-adc2-aea80faab822] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:20.315639   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [d8744325-58f2-43a8-9a93-516b5a6fb989] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:20.315644   72441 system_pods.go:61] "metrics-server-6867b74b74-gtg94" [600e9c90-20db-407e-b586-fae3809d87b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:20.315649   72441 system_pods.go:61] "storage-provisioner" [1efe7188-2d33-4a29-afbe-823adbef73b3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:20.315657   72441 system_pods.go:74] duration metric: took 11.674655ms to wait for pod list to return data ...
	I0906 20:04:20.315665   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:20.318987   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:20.319012   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:20.319023   72441 node_conditions.go:105] duration metric: took 3.354197ms to run NodePressure ...
	I0906 20:04:20.319038   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:20.600925   72441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607562   72441 kubeadm.go:739] kubelet initialised
	I0906 20:04:20.607590   72441 kubeadm.go:740] duration metric: took 6.637719ms waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607602   72441 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:20.611592   72441 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:26.558023   73230 start.go:364] duration metric: took 3m30.994815351s to acquireMachinesLock for "old-k8s-version-843298"
	I0906 20:04:26.558087   73230 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:26.558096   73230 fix.go:54] fixHost starting: 
	I0906 20:04:26.558491   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:26.558542   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:26.576511   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0906 20:04:26.576933   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:26.577434   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:04:26.577460   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:26.577794   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:26.577968   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:26.578128   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetState
	I0906 20:04:26.579640   73230 fix.go:112] recreateIfNeeded on old-k8s-version-843298: state=Stopped err=<nil>
	I0906 20:04:26.579674   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	W0906 20:04:26.579829   73230 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:26.581843   73230 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-843298" ...
	I0906 20:04:25.275406   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275902   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Found IP for machine: 192.168.50.16
	I0906 20:04:25.275942   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has current primary IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275955   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserving static IP address...
	I0906 20:04:25.276431   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.276463   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserved static IP address: 192.168.50.16
	I0906 20:04:25.276482   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | skip adding static IP to network mk-default-k8s-diff-port-653828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"}
	I0906 20:04:25.276493   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for SSH to be available...
	I0906 20:04:25.276512   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Getting to WaitForSSH function...
	I0906 20:04:25.278727   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279006   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.279037   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279196   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH client type: external
	I0906 20:04:25.279234   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa (-rw-------)
	I0906 20:04:25.279289   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:25.279312   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | About to run SSH command:
	I0906 20:04:25.279330   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | exit 0
	I0906 20:04:25.405134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | SSH cmd err, output: <nil>: 
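(Annotation: once the VM has an address, the driver waits for SSH by repeatedly invoking the system `ssh` client with the machine's private key and running `exit 0`, using the options printed above — no host-key checking, 10s connect timeout. A condensed local sketch of that probe loop; address and key path are taken from the log, and this is not the libmachine implementation.)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once `ssh ... exit 0` succeeds against addr.
func sshReady(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("SSH never became available on %s", addr)
}

func main() {
	err := sshReady("192.168.50.16",
		"/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa",
		2*time.Minute)
	fmt.Println(err)
}
```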
	I0906 20:04:25.405524   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetConfigRaw
	I0906 20:04:25.406134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.408667   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409044   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.409074   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409332   72867 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/config.json ...
	I0906 20:04:25.409513   72867 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:25.409530   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:25.409724   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.411737   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412027   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.412060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412171   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.412362   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412489   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412662   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.412802   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.413045   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.413059   72867 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:25.513313   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:25.513343   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513613   72867 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653828"
	I0906 20:04:25.513644   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513851   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.516515   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.516847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.516895   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.517116   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.517300   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517461   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517574   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.517712   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.517891   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.517905   72867 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653828 && echo "default-k8s-diff-port-653828" | sudo tee /etc/hostname
	I0906 20:04:25.637660   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653828
	
	I0906 20:04:25.637691   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.640258   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640600   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.640626   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640811   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.641001   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641177   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641333   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.641524   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.641732   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.641754   72867 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:25.749746   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:25.749773   72867 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:25.749795   72867 buildroot.go:174] setting up certificates
	I0906 20:04:25.749812   72867 provision.go:84] configureAuth start
	I0906 20:04:25.749828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.750111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.752528   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.752893   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.752920   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.753104   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.755350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755642   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.755666   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755808   72867 provision.go:143] copyHostCerts
	I0906 20:04:25.755858   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:25.755875   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:25.755930   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:25.756017   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:25.756024   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:25.756046   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:25.756129   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:25.756137   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:25.756155   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:25.756212   72867 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653828 san=[127.0.0.1 192.168.50.16 default-k8s-diff-port-653828 localhost minikube]
	I0906 20:04:25.934931   72867 provision.go:177] copyRemoteCerts
	I0906 20:04:25.935018   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:25.935060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.937539   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.937899   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.937925   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.938111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.938308   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.938469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.938644   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.019666   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:26.043989   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0906 20:04:26.066845   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 20:04:26.090526   72867 provision.go:87] duration metric: took 340.698646ms to configureAuth
	I0906 20:04:26.090561   72867 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:26.090786   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:26.090878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.093783   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094167   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.094201   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094503   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.094689   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094850   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094975   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.095130   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.095357   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.095389   72867 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:26.324270   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:26.324301   72867 machine.go:96] duration metric: took 914.775498ms to provisionDockerMachine
	I0906 20:04:26.324315   72867 start.go:293] postStartSetup for "default-k8s-diff-port-653828" (driver="kvm2")
	I0906 20:04:26.324328   72867 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:26.324350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.324726   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:26.324759   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.327339   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327718   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.327750   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.328147   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.328309   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.328449   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.408475   72867 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:26.413005   72867 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:26.413033   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:26.413107   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:26.413203   72867 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:26.413320   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:26.422811   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:26.449737   72867 start.go:296] duration metric: took 125.408167ms for postStartSetup
	I0906 20:04:26.449772   72867 fix.go:56] duration metric: took 19.779834553s for fixHost
	I0906 20:04:26.449792   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.452589   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.452990   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.453022   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.453323   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.453529   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453710   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.453966   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.454125   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.454136   72867 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:26.557844   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653066.531604649
	
	I0906 20:04:26.557875   72867 fix.go:216] guest clock: 1725653066.531604649
	I0906 20:04:26.557884   72867 fix.go:229] Guest: 2024-09-06 20:04:26.531604649 +0000 UTC Remote: 2024-09-06 20:04:26.449775454 +0000 UTC m=+269.281822801 (delta=81.829195ms)
	I0906 20:04:26.557904   72867 fix.go:200] guest clock delta is within tolerance: 81.829195ms
	I0906 20:04:26.557909   72867 start.go:83] releasing machines lock for "default-k8s-diff-port-653828", held for 19.888002519s
	I0906 20:04:26.557943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.558256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:26.561285   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561705   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.561732   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562425   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562628   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562732   72867 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:26.562782   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.562920   72867 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:26.562950   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.565587   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.565970   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566149   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566331   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.566542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.566605   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566633   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566744   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.566756   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566992   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.567145   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.567302   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.672529   72867 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:26.678762   72867 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:26.825625   72867 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:26.832290   72867 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:26.832363   72867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:26.848802   72867 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:26.848824   72867 start.go:495] detecting cgroup driver to use...
	I0906 20:04:26.848917   72867 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:26.864986   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:26.878760   72867 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:26.878813   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:26.893329   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:26.909090   72867 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:27.025534   72867 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:27.190190   72867 docker.go:233] disabling docker service ...
	I0906 20:04:27.190293   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:22.617468   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:24.618561   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.118448   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.204700   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:27.217880   72867 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:27.346599   72867 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:27.466601   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:27.480785   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:27.501461   72867 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:27.501523   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.511815   72867 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:27.511868   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.521806   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.532236   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.542227   72867 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:27.552389   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.563462   72867 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.583365   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.594465   72867 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:27.605074   72867 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:27.605140   72867 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:27.618702   72867 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:27.630566   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:27.748387   72867 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:27.841568   72867 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:27.841652   72867 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:27.846880   72867 start.go:563] Will wait 60s for crictl version
	I0906 20:04:27.846936   72867 ssh_runner.go:195] Run: which crictl
	I0906 20:04:27.851177   72867 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:27.895225   72867 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:27.895327   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.934388   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.966933   72867 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:04:26.583194   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .Start
	I0906 20:04:26.583341   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring networks are active...
	I0906 20:04:26.584046   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network default is active
	I0906 20:04:26.584420   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network mk-old-k8s-version-843298 is active
	I0906 20:04:26.584851   73230 main.go:141] libmachine: (old-k8s-version-843298) Getting domain xml...
	I0906 20:04:26.585528   73230 main.go:141] libmachine: (old-k8s-version-843298) Creating domain...
	I0906 20:04:27.874281   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting to get IP...
	I0906 20:04:27.875189   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:27.875762   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:27.875844   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:27.875754   74166 retry.go:31] will retry after 289.364241ms: waiting for machine to come up
	I0906 20:04:28.166932   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.167349   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.167375   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.167303   74166 retry.go:31] will retry after 317.106382ms: waiting for machine to come up
	I0906 20:04:28.485664   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.486147   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.486241   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.486199   74166 retry.go:31] will retry after 401.712201ms: waiting for machine to come up
	I0906 20:04:28.890039   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.890594   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.890621   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.890540   74166 retry.go:31] will retry after 570.418407ms: waiting for machine to come up
	I0906 20:04:29.462983   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:29.463463   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:29.463489   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:29.463428   74166 retry.go:31] will retry after 696.361729ms: waiting for machine to come up
	I0906 20:04:30.161305   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:30.161829   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:30.161876   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:30.161793   74166 retry.go:31] will retry after 896.800385ms: waiting for machine to come up
	I0906 20:04:27.968123   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:27.971448   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.971880   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:27.971904   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.972128   72867 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:27.981160   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:27.994443   72867 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653
828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:27.994575   72867 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:27.994635   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:28.043203   72867 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:28.043285   72867 ssh_runner.go:195] Run: which lz4
	I0906 20:04:28.048798   72867 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:28.053544   72867 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:28.053577   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:29.490070   72867 crio.go:462] duration metric: took 1.441303819s to copy over tarball
	I0906 20:04:29.490142   72867 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:31.649831   72867 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159650072s)
	I0906 20:04:31.649870   72867 crio.go:469] duration metric: took 2.159772826s to extract the tarball
	I0906 20:04:31.649880   72867 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:31.686875   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:31.729557   72867 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:31.729580   72867 cache_images.go:84] Images are preloaded, skipping loading
	I0906 20:04:31.729587   72867 kubeadm.go:934] updating node { 192.168.50.16 8444 v1.31.0 crio true true} ...
	I0906 20:04:31.729698   72867 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:31.729799   72867 ssh_runner.go:195] Run: crio config
	I0906 20:04:31.777272   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:31.777299   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:31.777316   72867 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:31.777336   72867 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.16 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653828 NodeName:default-k8s-diff-port-653828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:31.777509   72867 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.16
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653828"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:31.777577   72867 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:31.788008   72867 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:31.788070   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:31.798261   72867 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0906 20:04:31.815589   72867 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:31.832546   72867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0906 20:04:31.849489   72867 ssh_runner.go:195] Run: grep 192.168.50.16	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:31.853452   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:31.866273   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:31.984175   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:32.001110   72867 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828 for IP: 192.168.50.16
	I0906 20:04:32.001139   72867 certs.go:194] generating shared ca certs ...
	I0906 20:04:32.001160   72867 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:32.001343   72867 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:32.001399   72867 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:32.001413   72867 certs.go:256] generating profile certs ...
	I0906 20:04:32.001509   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/client.key
	I0906 20:04:32.001613   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key.01951d83
	I0906 20:04:32.001665   72867 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key
	I0906 20:04:32.001815   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:32.001866   72867 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:32.001880   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:32.001913   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:32.001933   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:32.001962   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:32.002001   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:32.002812   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:32.037177   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:32.078228   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:32.117445   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:32.153039   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 20:04:32.186458   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:28.120786   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:28.120826   72441 pod_ready.go:82] duration metric: took 7.509209061s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:28.120842   72441 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:30.129518   72441 pod_ready.go:103] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:31.059799   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.060272   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.060294   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.060226   74166 retry.go:31] will retry after 841.627974ms: waiting for machine to come up
	I0906 20:04:31.903823   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.904258   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.904280   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.904238   74166 retry.go:31] will retry after 1.274018797s: waiting for machine to come up
	I0906 20:04:33.179723   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:33.180090   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:33.180133   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:33.180059   74166 retry.go:31] will retry after 1.496142841s: waiting for machine to come up
	I0906 20:04:34.678209   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:34.678697   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:34.678726   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:34.678652   74166 retry.go:31] will retry after 1.795101089s: waiting for machine to come up
	I0906 20:04:32.216815   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:32.245378   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:32.272163   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:32.297017   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:32.321514   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:32.345724   72867 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:32.362488   72867 ssh_runner.go:195] Run: openssl version
	I0906 20:04:32.368722   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:32.380099   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384777   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384834   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.392843   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:32.405716   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:32.417043   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422074   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422143   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.427946   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:32.439430   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:32.450466   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455056   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455114   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.460970   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:32.471978   72867 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:32.476838   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:32.483008   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:32.489685   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:32.496446   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:32.502841   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:32.509269   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 20:04:32.515687   72867 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:32.515791   72867 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:32.515853   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.567687   72867 cri.go:89] found id: ""
	I0906 20:04:32.567763   72867 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:32.578534   72867 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:32.578552   72867 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:32.578598   72867 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:32.588700   72867 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:32.589697   72867 kubeconfig.go:125] found "default-k8s-diff-port-653828" server: "https://192.168.50.16:8444"
	I0906 20:04:32.591739   72867 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:32.601619   72867 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.16
	I0906 20:04:32.601649   72867 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:32.601659   72867 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:32.601724   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.640989   72867 cri.go:89] found id: ""
	I0906 20:04:32.641056   72867 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:32.659816   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:32.670238   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:32.670274   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:32.670327   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:04:32.679687   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:32.679778   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:32.689024   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:04:32.698403   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:32.698465   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:32.707806   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.717015   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:32.717105   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.726408   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:04:32.735461   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:32.735538   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:32.744701   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
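
The block above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane URL and removed when the URL is absent (here the files simply do not exist yet), after which the freshly rendered kubeadm.yaml is copied into place. A small Go sketch of that grep-then-remove pattern, with the URL and file list taken from the log (illustrative only):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Keep a kubeconfig only if it already points at the expected
        // control-plane endpoint; otherwise remove it so kubeadm regenerates it.
        server := "https://control-plane.minikube.internal:8444"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), server) {
                // Missing file or wrong endpoint: drop it (ignore remove errors).
                os.Remove(f)
                fmt.Println("removed stale config:", f)
                continue
            }
            fmt.Println("keeping:", f)
        }
    }
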
	I0906 20:04:32.754202   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:32.874616   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.759668   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.984693   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.051998   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.155274   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:34.155384   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:34.655749   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.156069   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.656120   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.672043   72867 api_server.go:72] duration metric: took 1.516769391s to wait for apiserver process to appear ...
	I0906 20:04:35.672076   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:35.672099   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:32.628208   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.628235   72441 pod_ready.go:82] duration metric: took 4.507383414s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.628248   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633941   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.633965   72441 pod_ready.go:82] duration metric: took 5.709738ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633975   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639227   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.639249   72441 pod_ready.go:82] duration metric: took 5.26842ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639259   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644664   72441 pod_ready.go:93] pod "kube-proxy-crvq7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.644690   72441 pod_ready.go:82] duration metric: took 5.423551ms for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644701   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650000   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.650022   72441 pod_ready.go:82] duration metric: took 5.312224ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650034   72441 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:34.657709   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:37.157744   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
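
Both profiles are now in pod_ready wait loops, repeatedly checking whether a pod's Ready condition is True before the 4m0s budget runs out. A hypothetical stand-in for such a loop using kubectl's jsonpath output (the pod name is taken from the log, the context name is assumed to match the profile; minikube itself uses the Go client rather than shelling out):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        // Poll the Ready condition of one pod until it reports True or we time out.
        jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "embed-certs-458066",
                "-n", "kube-system", "get", "pod", "metrics-server-6867b74b74-gtg94",
                "-o", "jsonpath="+jsonpath).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }
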
	I0906 20:04:38.092386   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.092429   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.092448   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.129071   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.129110   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.172277   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.213527   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.213573   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:38.673103   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.677672   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.677704   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.172237   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.179638   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:39.179670   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.672801   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.678523   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:04:39.688760   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:39.688793   72867 api_server.go:131] duration metric: took 4.016709147s to wait for apiserver health ...
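
The healthz sequence above is expected during a restart: the first probes return 403 because the anonymous user cannot read /healthz until the RBAC bootstrap roles exist, the next ones return 500 while post-start hooks are still failing, and the loop stops only on a 200 "ok". A minimal Go polling sketch against the same endpoint, assuming certificate verification is skipped (the real code authenticates with client certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Poll the apiserver /healthz endpoint from the log until it returns 200.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.50.16:8444/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver")
    }
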
	I0906 20:04:39.688804   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:39.688812   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:39.690721   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:36.474937   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:36.475399   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:36.475497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:36.475351   74166 retry.go:31] will retry after 1.918728827s: waiting for machine to come up
	I0906 20:04:38.397024   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:38.397588   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:38.397617   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:38.397534   74166 retry.go:31] will retry after 3.460427722s: waiting for machine to come up
	I0906 20:04:39.692055   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:39.707875   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
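
With the apiserver healthy, a bridge CNI configuration is written to /etc/cni/net.d/1-k8s.conflist. The exact 496-byte conflist is not shown in the log; the sketch below writes a generic bridge/host-local conflist of the same shape purely for illustration (all field values are assumptions):

    package main

    import "os"

    func main() {
        // Write a minimal bridge CNI conflist to the path referenced in the log.
        conflist := `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
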
	I0906 20:04:39.728797   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:39.740514   72867 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:39.740553   72867 system_pods.go:61] "coredns-6f6b679f8f-mvwth" [53675f76-d849-471c-9cd1-561e2f8e6499] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:39.740562   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [f69c9488-87d4-487e-902b-588182c2e2e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:39.740567   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [d641f983-776e-4102-81a3-ba3cf49911a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:39.740579   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [1b09e88d-b038-42d3-9c36-4eee1eff1c4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:39.740585   72867 system_pods.go:61] "kube-proxy-9wlq4" [5254a977-ded3-439d-8db0-cd54ccd96940] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:39.740590   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [f8c16cf5-2c76-428f-83de-e79c49566683] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:39.740594   72867 system_pods.go:61] "metrics-server-6867b74b74-dds56" [6219eb1e-2904-487c-b4ed-d786a0627281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:39.740598   72867 system_pods.go:61] "storage-provisioner" [58dd82cd-e250-4f57-97ad-55408f001cc3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:39.740605   72867 system_pods.go:74] duration metric: took 11.784722ms to wait for pod list to return data ...
	I0906 20:04:39.740614   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:39.745883   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:39.745913   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:39.745923   72867 node_conditions.go:105] duration metric: took 5.304169ms to run NodePressure ...
	I0906 20:04:39.745945   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:40.031444   72867 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036537   72867 kubeadm.go:739] kubelet initialised
	I0906 20:04:40.036556   72867 kubeadm.go:740] duration metric: took 5.087185ms waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036563   72867 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:40.044926   72867 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:42.050947   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:39.657641   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:42.156327   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:41.860109   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:41.860612   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:41.860640   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:41.860560   74166 retry.go:31] will retry after 4.509018672s: waiting for machine to come up
	I0906 20:04:44.051148   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.554068   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:44.157427   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.656559   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:47.793833   72322 start.go:364] duration metric: took 56.674519436s to acquireMachinesLock for "no-preload-504385"
	I0906 20:04:47.793890   72322 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:47.793898   72322 fix.go:54] fixHost starting: 
	I0906 20:04:47.794329   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:47.794363   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:47.812048   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0906 20:04:47.812496   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:47.813081   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:04:47.813109   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:47.813446   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:47.813741   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:04:47.813945   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:04:47.815314   72322 fix.go:112] recreateIfNeeded on no-preload-504385: state=Stopped err=<nil>
	I0906 20:04:47.815338   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	W0906 20:04:47.815507   72322 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:47.817424   72322 out.go:177] * Restarting existing kvm2 VM for "no-preload-504385" ...
	I0906 20:04:47.818600   72322 main.go:141] libmachine: (no-preload-504385) Calling .Start
	I0906 20:04:47.818760   72322 main.go:141] libmachine: (no-preload-504385) Ensuring networks are active...
	I0906 20:04:47.819569   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network default is active
	I0906 20:04:47.819883   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network mk-no-preload-504385 is active
	I0906 20:04:47.820233   72322 main.go:141] libmachine: (no-preload-504385) Getting domain xml...
	I0906 20:04:47.821002   72322 main.go:141] libmachine: (no-preload-504385) Creating domain...
	I0906 20:04:46.374128   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374599   73230 main.go:141] libmachine: (old-k8s-version-843298) Found IP for machine: 192.168.72.30
	I0906 20:04:46.374629   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has current primary IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374642   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserving static IP address...
	I0906 20:04:46.375045   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.375071   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | skip adding static IP to network mk-old-k8s-version-843298 - found existing host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"}
	I0906 20:04:46.375081   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserved static IP address: 192.168.72.30
	I0906 20:04:46.375104   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting for SSH to be available...
	I0906 20:04:46.375119   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Getting to WaitForSSH function...
	I0906 20:04:46.377497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377836   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.377883   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377956   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH client type: external
	I0906 20:04:46.377982   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa (-rw-------)
	I0906 20:04:46.378028   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:46.378044   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | About to run SSH command:
	I0906 20:04:46.378054   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | exit 0
	I0906 20:04:46.505025   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | SSH cmd err, output: <nil>: 
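
WaitForSSH above simply retries `exit 0` over the external ssh client with non-interactive options until the command succeeds. A Go sketch of that probe, reusing the address, user and key path printed in the log (retry count and interval are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Probe SSH availability by running `exit 0` until it succeeds.
        args := []string{
            "-o", "ConnectTimeout=10", "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null", "-o", "PasswordAuthentication=no",
            "-i", "/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa",
            "docker@192.168.72.30", "exit 0",
        }
        for attempt := 1; attempt <= 30; attempt++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }
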
	I0906 20:04:46.505386   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetConfigRaw
	I0906 20:04:46.506031   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.508401   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.508787   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.508827   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.509092   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:04:46.509321   73230 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:46.509339   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:46.509549   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.511816   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512230   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.512265   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512436   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.512618   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512794   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512932   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.513123   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.513364   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.513378   73230 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:46.629437   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:46.629469   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629712   73230 buildroot.go:166] provisioning hostname "old-k8s-version-843298"
	I0906 20:04:46.629731   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629910   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.632226   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632620   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.632653   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632817   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.633009   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633204   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633364   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.633544   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.633758   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.633779   73230 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-843298 && echo "old-k8s-version-843298" | sudo tee /etc/hostname
	I0906 20:04:46.764241   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-843298
	
	I0906 20:04:46.764271   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.766678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767063   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.767092   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767236   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.767414   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767591   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767740   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.767874   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.768069   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.768088   73230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-843298' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-843298/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-843298' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:46.890399   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:46.890424   73230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:46.890461   73230 buildroot.go:174] setting up certificates
	I0906 20:04:46.890471   73230 provision.go:84] configureAuth start
	I0906 20:04:46.890479   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.890714   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.893391   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893765   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.893802   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893942   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.896173   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896505   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.896524   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896688   73230 provision.go:143] copyHostCerts
	I0906 20:04:46.896741   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:46.896756   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:46.896814   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:46.896967   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:46.896977   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:46.897008   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:46.897096   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:46.897104   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:46.897133   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:46.897193   73230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-843298 san=[127.0.0.1 192.168.72.30 localhost minikube old-k8s-version-843298]
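
configureAuth regenerates the machine server certificate with the SANs listed above (127.0.0.1, 192.168.72.30, localhost, minikube, old-k8s-version-843298). The sketch below creates a certificate with those SANs using crypto/x509; for brevity it is self-signed, whereas minikube signs it with the CA key referenced in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed server certificate carrying the SANs from the log
        // (illustrative; the real server.pem is signed by the minikube CA).
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-843298"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-843298"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.30")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
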
	I0906 20:04:47.128570   73230 provision.go:177] copyRemoteCerts
	I0906 20:04:47.128627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:47.128653   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.131548   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.131952   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.131981   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.132164   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.132396   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.132571   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.132705   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.223745   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:47.249671   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0906 20:04:47.274918   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:47.300351   73230 provision.go:87] duration metric: took 409.869395ms to configureAuth
	I0906 20:04:47.300376   73230 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:47.300584   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:04:47.300673   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.303255   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303559   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.303581   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303739   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.303943   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304098   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304266   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.304407   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.304623   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.304644   73230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:47.539793   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:47.539824   73230 machine.go:96] duration metric: took 1.030489839s to provisionDockerMachine
	I0906 20:04:47.539836   73230 start.go:293] postStartSetup for "old-k8s-version-843298" (driver="kvm2")
	I0906 20:04:47.539849   73230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:47.539884   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.540193   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:47.540220   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.543190   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543482   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.543506   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543707   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.543938   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.544097   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.544243   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.633100   73230 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:47.637336   73230 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:47.637368   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:47.637459   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:47.637541   73230 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:47.637627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:47.648442   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:47.672907   73230 start.go:296] duration metric: took 133.055727ms for postStartSetup
	I0906 20:04:47.672951   73230 fix.go:56] duration metric: took 21.114855209s for fixHost
	I0906 20:04:47.672978   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.675459   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.675833   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.675863   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.676005   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.676303   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676471   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676661   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.676846   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.677056   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.677070   73230 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:47.793647   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653087.750926682
	
	I0906 20:04:47.793671   73230 fix.go:216] guest clock: 1725653087.750926682
	I0906 20:04:47.793681   73230 fix.go:229] Guest: 2024-09-06 20:04:47.750926682 +0000 UTC Remote: 2024-09-06 20:04:47.67295613 +0000 UTC m=+232.250384025 (delta=77.970552ms)
	I0906 20:04:47.793735   73230 fix.go:200] guest clock delta is within tolerance: 77.970552ms
	I0906 20:04:47.793746   73230 start.go:83] releasing machines lock for "old-k8s-version-843298", held for 21.235682628s
	I0906 20:04:47.793778   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.794059   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:47.796792   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797195   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.797229   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797425   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798019   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798230   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798314   73230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:47.798360   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.798488   73230 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:47.798509   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.801253   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801632   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.801658   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801867   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802060   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802122   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.802152   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.802210   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802318   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802460   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802504   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.802580   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802722   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.886458   73230 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:47.910204   73230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:48.055661   73230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:48.063024   73230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:48.063090   73230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:48.084749   73230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:48.084771   73230 start.go:495] detecting cgroup driver to use...
	I0906 20:04:48.084892   73230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:48.105494   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:48.123487   73230 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:48.123564   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:48.145077   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:48.161336   73230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:48.283568   73230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:48.445075   73230 docker.go:233] disabling docker service ...
	I0906 20:04:48.445146   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:48.461122   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:48.475713   73230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:48.632804   73230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:48.762550   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:48.778737   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:48.798465   73230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:04:48.798549   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.811449   73230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:48.811523   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.824192   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.835598   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.847396   73230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:48.860005   73230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:48.871802   73230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:48.871864   73230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:48.887596   73230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:48.899508   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:49.041924   73230 ssh_runner.go:195] Run: sudo systemctl restart crio
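The sed one-liners above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf (the pause image and the cgroup manager) before CRI-O is restarted. As a rough sketch only, the same key rewrite can be expressed in Go; setKey and the sample input below are assumptions for illustration, not minikube code.

package main

import (
	"fmt"
	"regexp"
)

// setKey replaces any existing `key = ...` line with `key = "value"`,
// mirroring the effect of the sed commands in the log above.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	// Hypothetical starting contents of the drop-in config.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}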
	I0906 20:04:49.144785   73230 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:49.144885   73230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:49.150404   73230 start.go:563] Will wait 60s for crictl version
	I0906 20:04:49.150461   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:49.154726   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:49.202450   73230 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:49.202557   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.235790   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.270094   73230 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0906 20:04:49.271457   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:49.274710   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275114   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:49.275139   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275475   73230 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:49.280437   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
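The bash one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends a fresh one. A small Go sketch of that upsert on the file contents follows; upsertHostsEntry is an illustrative name, not part of minikube.

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<name>" and appends a new
// "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.72.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.72.1", "host.minikube.internal"))
}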
	I0906 20:04:49.293664   73230 kubeadm.go:883] updating cluster {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:49.293793   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:04:49.293842   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:49.348172   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:49.348251   73230 ssh_runner.go:195] Run: which lz4
	I0906 20:04:49.352703   73230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:49.357463   73230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:49.357501   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0906 20:04:49.056116   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:51.553185   72867 pod_ready.go:93] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.553217   72867 pod_ready.go:82] duration metric: took 11.508264695s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.553231   72867 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563758   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.563788   72867 pod_ready.go:82] duration metric: took 10.547437ms for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563802   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570906   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.570940   72867 pod_ready.go:82] duration metric: took 7.128595ms for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570957   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:48.657527   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:50.662561   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:49.146755   72322 main.go:141] libmachine: (no-preload-504385) Waiting to get IP...
	I0906 20:04:49.147780   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.148331   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.148406   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.148309   74322 retry.go:31] will retry after 250.314453ms: waiting for machine to come up
	I0906 20:04:49.399920   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.400386   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.400468   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.400345   74322 retry.go:31] will retry after 247.263156ms: waiting for machine to come up
	I0906 20:04:49.648894   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.649420   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.649445   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.649376   74322 retry.go:31] will retry after 391.564663ms: waiting for machine to come up
	I0906 20:04:50.043107   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.043594   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.043617   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.043548   74322 retry.go:31] will retry after 513.924674ms: waiting for machine to come up
	I0906 20:04:50.559145   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.559637   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.559675   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.559543   74322 retry.go:31] will retry after 551.166456ms: waiting for machine to come up
	I0906 20:04:51.111906   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.112967   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.112999   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.112921   74322 retry.go:31] will retry after 653.982425ms: waiting for machine to come up
	I0906 20:04:51.768950   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.769466   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.769496   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.769419   74322 retry.go:31] will retry after 935.670438ms: waiting for machine to come up
	I0906 20:04:52.706493   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:52.707121   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:52.707152   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:52.707062   74322 retry.go:31] will retry after 1.141487289s: waiting for machine to come up
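The no-preload-504385 lines above show libmachine polling for the VM's DHCP lease and sleeping for a growing, jittered interval between attempts. A rough Go sketch of such a wait loop follows; lookupIP, the delay schedule, and the timeout are illustrative assumptions only.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for asking the hypervisor which IP the domain holds.
func lookupIP(domain string) (string, error) {
	return "", errNoLease // pretend the machine is still booting
}

// waitForIP retries lookupIP with a growing, jittered delay, roughly the
// pattern visible in the retry.go messages above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("no-preload-504385", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}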
	I0906 20:04:51.190323   73230 crio.go:462] duration metric: took 1.837657617s to copy over tarball
	I0906 20:04:51.190410   73230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:54.320754   73230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.130319477s)
	I0906 20:04:54.320778   73230 crio.go:469] duration metric: took 3.130424981s to extract the tarball
	I0906 20:04:54.320785   73230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:54.388660   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:54.427475   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:54.427505   73230 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:04:54.427580   73230 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.427594   73230 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.427611   73230 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.427662   73230 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.427691   73230 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.427696   73230 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.427813   73230 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.427672   73230 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0906 20:04:54.429432   73230 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.429443   73230 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.429447   73230 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.429448   73230 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.429475   73230 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.429449   73230 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.429496   73230 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.429589   73230 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 20:04:54.603502   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.607745   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.610516   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.613580   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.616591   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.622381   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 20:04:54.636746   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.690207   73230 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0906 20:04:54.690254   73230 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.690306   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.788758   73230 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0906 20:04:54.788804   73230 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.788876   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.804173   73230 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0906 20:04:54.804228   73230 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.804273   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817005   73230 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0906 20:04:54.817056   73230 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.817074   73230 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0906 20:04:54.817101   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817122   73230 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.817138   73230 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0906 20:04:54.817167   73230 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 20:04:54.817202   73230 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0906 20:04:54.817213   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817220   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.817227   73230 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.817168   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817253   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817301   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.817333   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902264   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.902422   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902522   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.902569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.902602   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.902654   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:54.902708   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.061686   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.073933   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.085364   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:55.085463   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.085399   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.085610   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:55.085725   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.192872   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:55.196085   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.255204   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.288569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.291461   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0906 20:04:55.291541   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.291559   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0906 20:04:55.291726   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0906 20:04:53.578469   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.578504   72867 pod_ready.go:82] duration metric: took 2.007539423s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.578534   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583560   72867 pod_ready.go:93] pod "kube-proxy-9wlq4" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.583583   72867 pod_ready.go:82] duration metric: took 5.037068ms for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583594   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832422   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:54.832453   72867 pod_ready.go:82] duration metric: took 1.248849975s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832480   72867 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:56.840031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:53.156842   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:55.236051   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:53.849822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:53.850213   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:53.850235   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:53.850178   74322 retry.go:31] will retry after 1.858736556s: waiting for machine to come up
	I0906 20:04:55.710052   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:55.710550   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:55.710598   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:55.710496   74322 retry.go:31] will retry after 2.033556628s: waiting for machine to come up
	I0906 20:04:57.745989   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:57.746433   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:57.746459   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:57.746388   74322 retry.go:31] will retry after 1.985648261s: waiting for machine to come up
	I0906 20:04:55.500590   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0906 20:04:55.500702   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0906 20:04:55.500740   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0906 20:04:55.500824   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0906 20:04:55.500885   73230 cache_images.go:92] duration metric: took 1.07336017s to LoadCachedImages
	W0906 20:04:55.500953   73230 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0906 20:04:55.500969   73230 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.20.0 crio true true} ...
	I0906 20:04:55.501112   73230 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-843298 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:55.501192   73230 ssh_runner.go:195] Run: crio config
	I0906 20:04:55.554097   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:04:55.554119   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:55.554135   73230 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:55.554154   73230 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-843298 NodeName:old-k8s-version-843298 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 20:04:55.554359   73230 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-843298"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:55.554441   73230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0906 20:04:55.565923   73230 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:55.566004   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:55.577366   73230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0906 20:04:55.595470   73230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:55.614641   73230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0906 20:04:55.637739   73230 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:55.642233   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:55.658409   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:55.804327   73230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:55.824288   73230 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298 for IP: 192.168.72.30
	I0906 20:04:55.824308   73230 certs.go:194] generating shared ca certs ...
	I0906 20:04:55.824323   73230 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:55.824479   73230 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:55.824541   73230 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:55.824560   73230 certs.go:256] generating profile certs ...
	I0906 20:04:55.824680   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.key
	I0906 20:04:55.824755   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3
	I0906 20:04:55.824799   73230 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key
	I0906 20:04:55.824952   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:55.824995   73230 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:55.825008   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:55.825041   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:55.825072   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:55.825102   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:55.825158   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:55.825878   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:55.868796   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:55.905185   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:55.935398   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:55.973373   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 20:04:56.008496   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:04:56.046017   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:56.080049   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:56.122717   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:56.151287   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:56.184273   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:56.216780   73230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:56.239708   73230 ssh_runner.go:195] Run: openssl version
	I0906 20:04:56.246127   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:56.257597   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262515   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262594   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.269207   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:56.281646   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:56.293773   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299185   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299255   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.305740   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:56.319060   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:56.330840   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336013   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336082   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.342576   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
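Each certificate above is installed by asking openssl for its subject hash and then linking /etc/ssl/certs/<hash>.0 to the PEM file. A hedged Go sketch of that pattern, shelling out to the same openssl command, is shown below; installCACert is our own name, not a minikube function.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert asks the openssl CLI for the certificate's subject hash
// (as the log does) and links /etc/ssl/certs/<hash>.0 to the PEM file.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs overwrites in the same way
	return os.Symlink(pemPath, link)
}

func main() {
	// Path taken from the log; running this for real requires root.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}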
	I0906 20:04:56.354648   73230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:56.359686   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:56.366321   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:56.372646   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:56.379199   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:56.386208   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:56.392519   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
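Each "openssl x509 -noout -in ... -checkend 86400" call above asks whether a certificate expires within the next 24 hours. The equivalent check can be sketched with Go's crypto/x509 as below; expiresWithin is an assumed helper name, and the two paths are just examples taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question "-checkend 86400" answers for 24 hours.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, err)
	}
}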
	I0906 20:04:56.399335   73230 kubeadm.go:392] StartCluster: {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:56.399442   73230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:56.399495   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.441986   73230 cri.go:89] found id: ""
	I0906 20:04:56.442069   73230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:56.454884   73230 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:56.454907   73230 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:56.454977   73230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:56.465647   73230 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:56.466650   73230 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:04:56.467285   73230 kubeconfig.go:62] /home/jenkins/minikube-integration/19576-6021/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-843298" cluster setting kubeconfig missing "old-k8s-version-843298" context setting]
	I0906 20:04:56.468248   73230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:56.565587   73230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:56.576221   73230 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.30
	I0906 20:04:56.576261   73230 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:56.576277   73230 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:56.576342   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.621597   73230 cri.go:89] found id: ""
	I0906 20:04:56.621663   73230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:56.639924   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:56.649964   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:56.649989   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:56.650042   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:56.661290   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:56.661343   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:56.671361   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:56.680865   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:56.680939   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:56.696230   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.706613   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:56.706692   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.719635   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:56.729992   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:56.730045   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:56.740040   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:56.750666   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:56.891897   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.681824   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.972206   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.091751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.206345   73230 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:58.206443   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:58.707412   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.206780   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.707273   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:00.207218   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.340092   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:01.838387   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:57.658033   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:00.157741   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:59.734045   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:59.734565   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:59.734592   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:59.734506   74322 retry.go:31] will retry after 2.767491398s: waiting for machine to come up
	I0906 20:05:02.505314   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:02.505749   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:05:02.505780   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:05:02.505697   74322 retry.go:31] will retry after 3.51382931s: waiting for machine to come up
	I0906 20:05:00.707010   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.206708   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.707125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.207349   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.706670   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.207287   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.706650   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.207125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.707193   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:05.207119   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.838639   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:05.839195   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:02.655906   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:04.656677   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:07.157732   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:06.023595   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024063   72322 main.go:141] libmachine: (no-preload-504385) Found IP for machine: 192.168.61.184
	I0906 20:05:06.024095   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has current primary IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024105   72322 main.go:141] libmachine: (no-preload-504385) Reserving static IP address...
	I0906 20:05:06.024576   72322 main.go:141] libmachine: (no-preload-504385) Reserved static IP address: 192.168.61.184
	I0906 20:05:06.024598   72322 main.go:141] libmachine: (no-preload-504385) Waiting for SSH to be available...
	I0906 20:05:06.024621   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.024643   72322 main.go:141] libmachine: (no-preload-504385) DBG | skip adding static IP to network mk-no-preload-504385 - found existing host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"}
	I0906 20:05:06.024666   72322 main.go:141] libmachine: (no-preload-504385) DBG | Getting to WaitForSSH function...
	I0906 20:05:06.026845   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027166   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.027219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027296   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH client type: external
	I0906 20:05:06.027321   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa (-rw-------)
	I0906 20:05:06.027355   72322 main.go:141] libmachine: (no-preload-504385) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:05:06.027376   72322 main.go:141] libmachine: (no-preload-504385) DBG | About to run SSH command:
	I0906 20:05:06.027403   72322 main.go:141] libmachine: (no-preload-504385) DBG | exit 0
	I0906 20:05:06.148816   72322 main.go:141] libmachine: (no-preload-504385) DBG | SSH cmd err, output: <nil>: 
	I0906 20:05:06.149196   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetConfigRaw
	I0906 20:05:06.149951   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.152588   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.152970   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.153003   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.153238   72322 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/config.json ...
	I0906 20:05:06.153485   72322 machine.go:93] provisionDockerMachine start ...
	I0906 20:05:06.153508   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:06.153714   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.156031   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156394   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.156425   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156562   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.156732   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.156901   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.157051   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.157205   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.157411   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.157425   72322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:05:06.261544   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:05:06.261586   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.261861   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:05:06.261895   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.262063   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.264812   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265192   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.265219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265400   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.265570   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265705   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265856   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.265990   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.266145   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.266157   72322 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-504385 && echo "no-preload-504385" | sudo tee /etc/hostname
	I0906 20:05:06.383428   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-504385
	
	I0906 20:05:06.383456   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.386368   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386722   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.386755   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386968   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.387152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387322   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387439   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.387617   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.387817   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.387840   72322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-504385' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-504385/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-504385' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:05:06.501805   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:05:06.501836   72322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:05:06.501854   72322 buildroot.go:174] setting up certificates
	I0906 20:05:06.501866   72322 provision.go:84] configureAuth start
	I0906 20:05:06.501873   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.502152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.504721   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505086   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.505115   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505250   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.507420   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507765   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.507795   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507940   72322 provision.go:143] copyHostCerts
	I0906 20:05:06.508008   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:05:06.508031   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:05:06.508087   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:05:06.508175   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:05:06.508183   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:05:06.508208   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:05:06.508297   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:05:06.508307   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:05:06.508338   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:05:06.508406   72322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.no-preload-504385 san=[127.0.0.1 192.168.61.184 localhost minikube no-preload-504385]
	I0906 20:05:06.681719   72322 provision.go:177] copyRemoteCerts
	I0906 20:05:06.681786   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:05:06.681810   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.684460   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684779   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.684822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684962   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.685125   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.685258   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.685368   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:06.767422   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:05:06.794881   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0906 20:05:06.821701   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:05:06.848044   72322 provision.go:87] duration metric: took 346.1664ms to configureAuth
	I0906 20:05:06.848075   72322 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:05:06.848271   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:05:06.848348   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.850743   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851037   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.851064   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851226   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.851395   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851549   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851674   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.851791   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.851993   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.852020   72322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:05:07.074619   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:05:07.074643   72322 machine.go:96] duration metric: took 921.143238ms to provisionDockerMachine
	I0906 20:05:07.074654   72322 start.go:293] postStartSetup for "no-preload-504385" (driver="kvm2")
	I0906 20:05:07.074664   72322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:05:07.074678   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.075017   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:05:07.075042   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.077988   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078268   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.078287   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078449   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.078634   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.078791   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.078946   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.165046   72322 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:05:07.169539   72322 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:05:07.169565   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:05:07.169631   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:05:07.169700   72322 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:05:07.169783   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:05:07.179344   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:07.204213   72322 start.go:296] duration metric: took 129.545341ms for postStartSetup
	I0906 20:05:07.204265   72322 fix.go:56] duration metric: took 19.41036755s for fixHost
	I0906 20:05:07.204287   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.207087   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207473   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.207513   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207695   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.207905   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208090   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208267   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.208436   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:07.208640   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:07.208655   72322 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:05:07.314172   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653107.281354639
	
	I0906 20:05:07.314195   72322 fix.go:216] guest clock: 1725653107.281354639
	I0906 20:05:07.314205   72322 fix.go:229] Guest: 2024-09-06 20:05:07.281354639 +0000 UTC Remote: 2024-09-06 20:05:07.204269406 +0000 UTC m=+358.676673749 (delta=77.085233ms)
	I0906 20:05:07.314228   72322 fix.go:200] guest clock delta is within tolerance: 77.085233ms
	I0906 20:05:07.314237   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 19.52037381s
	I0906 20:05:07.314266   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.314552   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:07.317476   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.317839   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.317873   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.318003   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318542   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318716   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318821   72322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:05:07.318876   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.318991   72322 ssh_runner.go:195] Run: cat /version.json
	I0906 20:05:07.319018   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.321880   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322102   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322308   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322340   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322472   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322508   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322550   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322685   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.322713   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322868   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.322875   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.323062   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.323066   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.323221   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.424438   72322 ssh_runner.go:195] Run: systemctl --version
	I0906 20:05:07.430755   72322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:05:07.579436   72322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:05:07.585425   72322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:05:07.585493   72322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:05:07.601437   72322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:05:07.601462   72322 start.go:495] detecting cgroup driver to use...
	I0906 20:05:07.601529   72322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:05:07.620368   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:05:07.634848   72322 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:05:07.634912   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:05:07.648810   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:05:07.664084   72322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:05:07.796601   72322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:05:07.974836   72322 docker.go:233] disabling docker service ...
	I0906 20:05:07.974911   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:05:07.989013   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:05:08.002272   72322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:05:08.121115   72322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:05:08.247908   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:05:08.262855   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:05:08.281662   72322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:05:08.281730   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.292088   72322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:05:08.292165   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.302601   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.313143   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.323852   72322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:05:08.335791   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.347619   72322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.365940   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.376124   72322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:05:08.385677   72322 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:05:08.385743   72322 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:05:08.398445   72322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:05:08.408477   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:08.518447   72322 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:05:08.613636   72322 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:05:08.613707   72322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:05:08.619050   72322 start.go:563] Will wait 60s for crictl version
	I0906 20:05:08.619134   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:08.622959   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:05:08.668229   72322 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:05:08.668297   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.702416   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.733283   72322 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:05:05.707351   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.206573   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.707452   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.206554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.706854   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.206925   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.707456   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.207200   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.706741   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:10.206605   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.839381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.839918   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.157889   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:11.158761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:08.734700   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:08.737126   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737477   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:08.737504   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737692   72322 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0906 20:05:08.741940   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:08.756235   72322 kubeadm.go:883] updating cluster {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:05:08.756380   72322 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:05:08.756426   72322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:05:08.798359   72322 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:05:08.798388   72322 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:05:08.798484   72322 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.798507   72322 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.798520   72322 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0906 20:05:08.798559   72322 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.798512   72322 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.798571   72322 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.798494   72322 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.798489   72322 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800044   72322 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.800055   72322 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800048   72322 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0906 20:05:08.800067   72322 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.800070   72322 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.800043   72322 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.800046   72322 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.800050   72322 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.960723   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.967887   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.980496   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.988288   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.990844   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.000220   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.031002   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0906 20:05:09.046388   72322 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0906 20:05:09.046430   72322 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.046471   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.079069   72322 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0906 20:05:09.079112   72322 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.079161   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147423   72322 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0906 20:05:09.147470   72322 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.147521   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147529   72322 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0906 20:05:09.147549   72322 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.147584   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153575   72322 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0906 20:05:09.153612   72322 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.153659   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153662   72322 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0906 20:05:09.153697   72322 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.153736   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.272296   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.272317   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.272325   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.272368   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.272398   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.272474   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.397590   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.398793   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.398807   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.398899   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.398912   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.398969   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.515664   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.529550   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.529604   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.529762   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.532314   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.532385   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.603138   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.654698   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0906 20:05:09.654823   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:09.671020   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0906 20:05:09.671069   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0906 20:05:09.671123   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0906 20:05:09.671156   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:09.671128   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.671208   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:09.686883   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0906 20:05:09.687013   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:09.709594   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0906 20:05:09.709706   72322 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0906 20:05:09.709758   72322 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.709858   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0906 20:05:09.709877   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709868   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.709940   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0906 20:05:09.709906   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709994   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0906 20:05:09.709771   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0906 20:05:09.709973   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0906 20:05:09.709721   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:09.714755   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0906 20:05:12.389459   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.679458658s)
	I0906 20:05:12.389498   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0906 20:05:12.389522   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389524   72322 ssh_runner.go:235] Completed: which crictl: (2.679596804s)
	I0906 20:05:12.389573   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389582   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:10.706506   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.207411   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.707316   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.207239   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.706502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.206560   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.706593   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.207192   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.706940   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:15.207250   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
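The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" calls above are a wait loop: the runner polls about twice a second until a kube-apiserver process matching that pattern shows up on the node. A minimal local sketch of the same pattern, with an illustrative timeout and helper name (this is not minikube's actual code), looks like:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists or the
// timeout expires, mirroring the repeated pgrep calls in the log above.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the full command line (-f).
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}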
	I0906 20:05:12.338753   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.339694   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.839193   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:13.656815   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.156988   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.349906   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.960304583s)
	I0906 20:05:14.349962   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960364149s)
	I0906 20:05:14.349988   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:14.350001   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0906 20:05:14.350032   72322 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.350085   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.397740   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:16.430883   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.03310928s)
	I0906 20:05:16.430943   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 20:05:16.430977   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.080869318s)
	I0906 20:05:16.431004   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0906 20:05:16.431042   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:16.431042   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:16.431103   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:18.293255   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.862123731s)
	I0906 20:05:18.293274   72322 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.862211647s)
	I0906 20:05:18.293294   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0906 20:05:18.293315   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0906 20:05:18.293324   72322 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:18.293372   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:15.706728   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.207477   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.707337   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.206710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.707209   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.206544   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.707104   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.206752   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.706561   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:20.206507   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.840176   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.339033   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:18.657074   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.157488   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:19.142756   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0906 20:05:19.142784   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:19.142824   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:20.494611   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351756729s)
	I0906 20:05:20.494642   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0906 20:05:20.494656   72322 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.494706   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.706855   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.206585   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.706948   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.207150   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.706508   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.207459   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.706894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.206643   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.707208   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:25.206797   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.838561   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:25.838697   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:23.656303   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:26.156813   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:24.186953   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.692203906s)
	I0906 20:05:24.186987   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0906 20:05:24.187019   72322 cache_images.go:123] Successfully loaded all cached images
	I0906 20:05:24.187026   72322 cache_images.go:92] duration metric: took 15.388623154s to LoadCachedImages
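The stat -c "%s %y" calls followed by "copy: skipping ... (exists)" show how the image loader avoids re-transferring cached tarballs: each local tarball is compared against the copy already on the node and only sent when they differ. A simplified, purely local sketch of that decision (the helper name is invented and the real comparison runs over SSH) might be:

package main

import (
	"fmt"
	"os"
)

// needsTransfer reports whether dst is missing or differs from src by size or
// modification time, the same decision behind "copy: skipping ... (exists)".
func needsTransfer(src, dst string) (bool, error) {
	srcInfo, err := os.Stat(src)
	if err != nil {
		return false, err
	}
	dstInfo, err := os.Stat(dst)
	if os.IsNotExist(err) {
		return true, nil // nothing on the target yet, must copy
	}
	if err != nil {
		return false, err
	}
	same := srcInfo.Size() == dstInfo.Size() && srcInfo.ModTime().Equal(dstInfo.ModTime())
	return !same, nil
}

func main() {
	transfer, err := needsTransfer(
		"/home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0",
		"/var/lib/minikube/images/kube-proxy_v1.31.0",
	)
	fmt.Println(transfer, err)
}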
	I0906 20:05:24.187040   72322 kubeadm.go:934] updating node { 192.168.61.184 8443 v1.31.0 crio true true} ...
	I0906 20:05:24.187169   72322 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-504385 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:05:24.187251   72322 ssh_runner.go:195] Run: crio config
	I0906 20:05:24.236699   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:24.236722   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:24.236746   72322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:05:24.236770   72322 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.184 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-504385 NodeName:no-preload-504385 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:05:24.236943   72322 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-504385"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
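The manifest above is one file containing four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. For sanity-checking such a file outside of minikube, a small sketch (assuming gopkg.in/yaml.v3 is available; the path is the one written by the log) can decode each document and print its apiVersion and kind:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path as written by the runner above; adjust for a local copy of the manifest.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}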
	
	I0906 20:05:24.237005   72322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:05:24.247480   72322 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:05:24.247554   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:05:24.257088   72322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0906 20:05:24.274447   72322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:05:24.292414   72322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0906 20:05:24.310990   72322 ssh_runner.go:195] Run: grep 192.168.61.184	control-plane.minikube.internal$ /etc/hosts
	I0906 20:05:24.315481   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:24.327268   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:24.465318   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:05:24.482195   72322 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385 for IP: 192.168.61.184
	I0906 20:05:24.482216   72322 certs.go:194] generating shared ca certs ...
	I0906 20:05:24.482230   72322 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:05:24.482364   72322 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:05:24.482407   72322 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:05:24.482420   72322 certs.go:256] generating profile certs ...
	I0906 20:05:24.482522   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/client.key
	I0906 20:05:24.482603   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key.9c78613e
	I0906 20:05:24.482664   72322 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key
	I0906 20:05:24.482828   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:05:24.482878   72322 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:05:24.482894   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:05:24.482927   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:05:24.482956   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:05:24.482992   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:05:24.483043   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:24.483686   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:05:24.528742   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:05:24.561921   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:05:24.596162   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:05:24.636490   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 20:05:24.664450   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:05:24.690551   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:05:24.717308   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:05:24.741498   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:05:24.764388   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:05:24.789473   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:05:24.814772   72322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:05:24.833405   72322 ssh_runner.go:195] Run: openssl version
	I0906 20:05:24.841007   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:05:24.852635   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857351   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857404   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.863435   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:05:24.874059   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:05:24.884939   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889474   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889567   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.895161   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:05:24.905629   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:05:24.916101   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920494   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920550   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.925973   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:05:24.937017   72322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:05:24.941834   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:05:24.947779   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:05:24.954042   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:05:24.959977   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:05:24.965500   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:05:24.970996   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
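Each "openssl x509 -noout -in <cert> -checkend 86400" above asks whether the certificate will still be valid 24 hours from now. The equivalent check with Go's standard library (the path is one of the certs listed above) is:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend" answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}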
	I0906 20:05:24.976532   72322 kubeadm.go:392] StartCluster: {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:05:24.976606   72322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:05:24.976667   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.015556   72322 cri.go:89] found id: ""
	I0906 20:05:25.015653   72322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:05:25.032921   72322 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:05:25.032954   72322 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:05:25.033009   72322 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:05:25.044039   72322 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:05:25.045560   72322 kubeconfig.go:125] found "no-preload-504385" server: "https://192.168.61.184:8443"
	I0906 20:05:25.049085   72322 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:05:25.059027   72322 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.184
	I0906 20:05:25.059060   72322 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:05:25.059073   72322 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:05:25.059128   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.096382   72322 cri.go:89] found id: ""
	I0906 20:05:25.096446   72322 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:05:25.114296   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:05:25.126150   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:05:25.126168   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:05:25.126207   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:05:25.136896   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:05:25.136964   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:05:25.148074   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:05:25.158968   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:05:25.159027   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:05:25.169642   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.179183   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:05:25.179258   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.189449   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:05:25.199237   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:05:25.199286   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:05:25.209663   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:05:25.220511   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:25.336312   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.475543   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.139195419s)
	I0906 20:05:26.475586   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.700018   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.768678   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.901831   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:05:26.901928   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.401987   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.903023   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.957637   72322 api_server.go:72] duration metric: took 1.055807s to wait for apiserver process to appear ...
	I0906 20:05:27.957664   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:05:27.957684   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:27.958196   72322 api_server.go:269] stopped: https://192.168.61.184:8443/healthz: Get "https://192.168.61.184:8443/healthz": dial tcp 192.168.61.184:8443: connect: connection refused
	I0906 20:05:28.458421   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:25.706669   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.206691   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.707336   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.206666   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.706715   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.206488   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.706489   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.207461   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.707293   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:30.206591   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.840001   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:29.840101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.768451   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:05:30.768482   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:05:30.768505   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.868390   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.868430   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:30.958611   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.964946   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.964977   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.458125   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.462130   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.462155   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.958761   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.963320   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.963347   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:32.458596   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:32.464885   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:05:32.474582   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:05:32.474616   72322 api_server.go:131] duration metric: took 4.51694462s to wait for apiserver health ...
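The healthz exchange above follows a typical restart pattern: first connection refused while the apiserver binds its port, then 403 for the anonymous probe while the RBAC bootstrap roles are still being created, then 500 while individual poststarthooks finish, and finally 200. A stripped-down sketch of that polling loop (URL and timeout are illustrative; TLS verification is skipped because the probe is unauthenticated):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires,
// printing intermediate statuses the way the log above records 403s and 500s.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe against a self-signed apiserver certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.184:8443/healthz", 4*time.Minute))
}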
	I0906 20:05:32.474627   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:32.474635   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:32.476583   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:05:28.157326   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.657628   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:32.477797   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:05:32.490715   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:05:32.510816   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:05:32.529192   72322 system_pods.go:59] 8 kube-system pods found
	I0906 20:05:32.529236   72322 system_pods.go:61] "coredns-6f6b679f8f-s7tnx" [ce438653-a3b9-4412-8705-7d2db7df5d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:05:32.529254   72322 system_pods.go:61] "etcd-no-preload-504385" [6ec6b2a1-c22a-44b4-b726-808a56f2be2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:05:32.529266   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [5f2baa0b-3cf3-4e0d-984b-80fa19adb3b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:05:32.529275   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [59ffbd51-6a83-43e6-8ef7-bc1cfd80b4d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:05:32.529292   72322 system_pods.go:61] "kube-proxy-dg8sg" [2e0393f3-b9bd-4603-b800-e1a2fdbf71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:05:32.529300   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [52a74c91-a6ec-4d64-8651-e1f87db21b40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:05:32.529306   72322 system_pods.go:61] "metrics-server-6867b74b74-nn295" [9d0f51d1-7abf-4f63-bef7-c02f6cd89c5d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:05:32.529313   72322 system_pods.go:61] "storage-provisioner" [69ed0066-2b84-4a4d-91e5-1e25bb3f31eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:05:32.529320   72322 system_pods.go:74] duration metric: took 18.48107ms to wait for pod list to return data ...
	I0906 20:05:32.529333   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:05:32.535331   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:05:32.535363   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:05:32.535376   72322 node_conditions.go:105] duration metric: took 6.037772ms to run NodePressure ...
	I0906 20:05:32.535397   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:32.955327   72322 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962739   72322 kubeadm.go:739] kubelet initialised
	I0906 20:05:32.962767   72322 kubeadm.go:740] duration metric: took 7.415054ms waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962776   72322 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:05:32.980280   72322 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
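The pod_ready lines interleaved throughout this log come from polling the Kubernetes API for each pod's Ready condition. A minimal client-go sketch of that check (kubeconfig path and poll interval are illustrative, not minikube's actual values; the pod name is taken from the log above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's Ready condition is True.
func podReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := podReady(client, "kube-system", "coredns-6f6b679f8f-s7tnx")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}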
	I0906 20:05:30.707091   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.207070   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.707224   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.207295   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.707195   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.207373   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.707519   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.207428   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.706808   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:35.207396   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.340006   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.838636   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:36.838703   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:33.155769   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.156761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.994689   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.487610   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.707415   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.206955   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.706868   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.206515   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.706659   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.206735   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.706915   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.207300   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.707211   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:40.207085   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.839362   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:41.338875   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.657190   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.158940   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:39.986557   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.486518   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.706720   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.206896   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.707281   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.206751   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.706754   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.206987   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.707245   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.207502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.707112   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:45.206569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.339353   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.838975   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.657187   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.156196   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:47.157014   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:43.986675   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.986701   72322 pod_ready.go:82] duration metric: took 11.006397745s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.986710   72322 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991650   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.991671   72322 pod_ready.go:82] duration metric: took 4.955425ms for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991680   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997218   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:44.997242   72322 pod_ready.go:82] duration metric: took 1.005553613s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997253   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002155   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.002177   72322 pod_ready.go:82] duration metric: took 4.916677ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002186   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006610   72322 pod_ready.go:93] pod "kube-proxy-dg8sg" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.006631   72322 pod_ready.go:82] duration metric: took 4.439092ms for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006639   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185114   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.185139   72322 pod_ready.go:82] duration metric: took 178.494249ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185149   72322 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:47.191676   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.707450   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.207446   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.707006   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.206484   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.707168   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.207536   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.707554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.206894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.706709   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:50.206799   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.338355   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.839372   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.157301   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.157426   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.193619   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.692286   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.707012   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.206914   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.706917   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.207465   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.706682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.206565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.706757   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.206600   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.706926   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:55.207382   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.338845   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.339570   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:53.656904   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.158806   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:54.191331   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.192498   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.707103   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.206621   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.707156   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.207277   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.706568   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:58.206599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:05:58.206698   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:05:58.245828   73230 cri.go:89] found id: ""
	I0906 20:05:58.245857   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.245868   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:05:58.245875   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:05:58.245938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:05:58.283189   73230 cri.go:89] found id: ""
	I0906 20:05:58.283217   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.283228   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:05:58.283235   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:05:58.283303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:05:58.320834   73230 cri.go:89] found id: ""
	I0906 20:05:58.320868   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.320880   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:05:58.320889   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:05:58.320944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:05:58.356126   73230 cri.go:89] found id: ""
	I0906 20:05:58.356152   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.356162   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:05:58.356169   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:05:58.356227   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:05:58.395951   73230 cri.go:89] found id: ""
	I0906 20:05:58.395977   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.395987   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:05:58.395994   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:05:58.396061   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:05:58.431389   73230 cri.go:89] found id: ""
	I0906 20:05:58.431415   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.431426   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:05:58.431433   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:05:58.431511   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:05:58.466255   73230 cri.go:89] found id: ""
	I0906 20:05:58.466285   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.466294   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:05:58.466300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:05:58.466356   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:05:58.505963   73230 cri.go:89] found id: ""
	I0906 20:05:58.505989   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.505997   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:05:58.506006   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:05:58.506018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:05:58.579027   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:05:58.579061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:05:58.620332   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:05:58.620365   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:05:58.675017   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:05:58.675052   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:05:58.689944   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:05:58.689970   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:05:58.825396   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
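	Each block like the one above is minikube's fallback diagnostics once no apiserver ever appears: it asks CRI-O for every control-plane container by name (all queries come back empty), then gathers the crio and kubelet journals, dmesg, container status, and a "describe nodes", which fails with "connection refused" on localhost:8443 precisely because no apiserver container is running. The same data can be collected by hand on the node; the commands below are taken straight from the log (the kubectl binary path is specific to this v1.20.0 profile):

	    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output means no apiserver container at all
	    sudo journalctl -u crio -n 400                    # CRI-O runtime log tail
	    sudo journalctl -u kubelet -n 400                 # kubelet log tail, usually shows why static pods never start
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig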
	I0906 20:05:57.838610   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.339329   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.656312   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.656996   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.691099   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.692040   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.192516   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:01.326375   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:01.340508   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:01.340570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:01.375429   73230 cri.go:89] found id: ""
	I0906 20:06:01.375460   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.375470   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:01.375478   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:01.375539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:01.410981   73230 cri.go:89] found id: ""
	I0906 20:06:01.411008   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.411019   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:01.411026   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:01.411083   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:01.448925   73230 cri.go:89] found id: ""
	I0906 20:06:01.448957   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.448968   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:01.448975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:01.449040   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:01.492063   73230 cri.go:89] found id: ""
	I0906 20:06:01.492094   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.492104   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:01.492112   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:01.492181   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:01.557779   73230 cri.go:89] found id: ""
	I0906 20:06:01.557812   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.557823   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:01.557830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:01.557892   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:01.604397   73230 cri.go:89] found id: ""
	I0906 20:06:01.604424   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.604432   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:01.604437   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:01.604482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:01.642249   73230 cri.go:89] found id: ""
	I0906 20:06:01.642280   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.642292   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:01.642300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:01.642364   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:01.692434   73230 cri.go:89] found id: ""
	I0906 20:06:01.692462   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.692474   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:01.692483   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:01.692498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:01.705860   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:01.705884   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:01.783929   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:01.783954   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:01.783965   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:01.864347   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:01.864385   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:01.902284   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:01.902311   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:04.456090   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:04.469775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:04.469840   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:04.505742   73230 cri.go:89] found id: ""
	I0906 20:06:04.505769   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.505778   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:04.505783   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:04.505835   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:04.541787   73230 cri.go:89] found id: ""
	I0906 20:06:04.541811   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.541819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:04.541824   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:04.541874   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:04.578775   73230 cri.go:89] found id: ""
	I0906 20:06:04.578806   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.578817   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:04.578825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:04.578885   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:04.614505   73230 cri.go:89] found id: ""
	I0906 20:06:04.614533   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.614542   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:04.614548   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:04.614594   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:04.652988   73230 cri.go:89] found id: ""
	I0906 20:06:04.653016   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.653027   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:04.653035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:04.653104   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:04.692380   73230 cri.go:89] found id: ""
	I0906 20:06:04.692408   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.692416   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:04.692423   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:04.692478   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:04.729846   73230 cri.go:89] found id: ""
	I0906 20:06:04.729869   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.729880   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:04.729887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:04.729953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:04.766341   73230 cri.go:89] found id: ""
	I0906 20:06:04.766370   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.766379   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:04.766390   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:04.766405   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:04.779801   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:04.779828   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:04.855313   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:04.855334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:04.855346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:04.934210   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:04.934246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:04.975589   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:04.975621   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:02.839427   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:04.840404   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.158048   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.655510   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.192558   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.692755   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.528622   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:07.544085   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:07.544156   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:07.588106   73230 cri.go:89] found id: ""
	I0906 20:06:07.588139   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.588149   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:07.588157   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:07.588210   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:07.630440   73230 cri.go:89] found id: ""
	I0906 20:06:07.630476   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.630494   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:07.630500   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:07.630551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:07.668826   73230 cri.go:89] found id: ""
	I0906 20:06:07.668870   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.668889   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:07.668898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:07.668962   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:07.706091   73230 cri.go:89] found id: ""
	I0906 20:06:07.706118   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.706130   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:07.706138   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:07.706196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:07.741679   73230 cri.go:89] found id: ""
	I0906 20:06:07.741708   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.741719   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:07.741726   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:07.741792   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:07.778240   73230 cri.go:89] found id: ""
	I0906 20:06:07.778277   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.778288   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:07.778296   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:07.778352   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:07.813183   73230 cri.go:89] found id: ""
	I0906 20:06:07.813212   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.813224   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:07.813232   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:07.813294   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:07.853938   73230 cri.go:89] found id: ""
	I0906 20:06:07.853970   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.853980   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:07.853988   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:07.854001   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:07.893540   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:07.893567   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:07.944219   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:07.944262   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:07.959601   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:07.959635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:08.034487   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:08.034513   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:08.034529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:07.339634   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:09.838953   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.658315   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.157980   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.192738   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.691823   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.611413   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:10.625273   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:10.625353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:10.664568   73230 cri.go:89] found id: ""
	I0906 20:06:10.664597   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.664609   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:10.664617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:10.664680   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:10.702743   73230 cri.go:89] found id: ""
	I0906 20:06:10.702772   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.702783   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:10.702790   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:10.702850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:10.739462   73230 cri.go:89] found id: ""
	I0906 20:06:10.739487   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.739504   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:10.739511   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:10.739572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:10.776316   73230 cri.go:89] found id: ""
	I0906 20:06:10.776344   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.776355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:10.776362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:10.776420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:10.809407   73230 cri.go:89] found id: ""
	I0906 20:06:10.809440   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.809451   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:10.809459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:10.809519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:10.844736   73230 cri.go:89] found id: ""
	I0906 20:06:10.844765   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.844777   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:10.844784   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:10.844851   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:10.880658   73230 cri.go:89] found id: ""
	I0906 20:06:10.880685   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.880693   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:10.880698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:10.880753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:10.917032   73230 cri.go:89] found id: ""
	I0906 20:06:10.917063   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.917074   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:10.917085   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:10.917100   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:10.980241   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:10.980272   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:10.995389   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:10.995435   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:11.070285   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:11.070313   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:11.070328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:11.155574   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:11.155607   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:13.703712   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:13.718035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:13.718093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:13.753578   73230 cri.go:89] found id: ""
	I0906 20:06:13.753603   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.753611   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:13.753617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:13.753659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:13.790652   73230 cri.go:89] found id: ""
	I0906 20:06:13.790681   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.790691   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:13.790697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:13.790749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:13.824243   73230 cri.go:89] found id: ""
	I0906 20:06:13.824278   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.824288   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:13.824293   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:13.824342   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:13.859647   73230 cri.go:89] found id: ""
	I0906 20:06:13.859691   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.859702   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:13.859721   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:13.859781   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:13.897026   73230 cri.go:89] found id: ""
	I0906 20:06:13.897061   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.897068   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:13.897075   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:13.897131   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:13.933904   73230 cri.go:89] found id: ""
	I0906 20:06:13.933927   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.933935   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:13.933941   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:13.933986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:13.969168   73230 cri.go:89] found id: ""
	I0906 20:06:13.969198   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.969210   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:13.969218   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:13.969295   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:14.005808   73230 cri.go:89] found id: ""
	I0906 20:06:14.005838   73230 logs.go:276] 0 containers: []
	W0906 20:06:14.005849   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:14.005862   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:14.005878   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:14.060878   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:14.060915   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:14.075388   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:14.075414   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:14.144942   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:14.144966   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:14.144981   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:14.233088   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:14.233139   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:12.338579   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.839062   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.655992   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.657020   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.157119   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.692103   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.193196   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:16.776744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:16.790292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:16.790384   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:16.828877   73230 cri.go:89] found id: ""
	I0906 20:06:16.828910   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.828921   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:16.828929   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:16.829016   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:16.864413   73230 cri.go:89] found id: ""
	I0906 20:06:16.864440   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.864449   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:16.864455   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:16.864525   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:16.908642   73230 cri.go:89] found id: ""
	I0906 20:06:16.908676   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.908687   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:16.908694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:16.908748   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:16.952247   73230 cri.go:89] found id: ""
	I0906 20:06:16.952278   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.952286   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:16.952292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:16.952343   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:16.990986   73230 cri.go:89] found id: ""
	I0906 20:06:16.991013   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.991022   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:16.991028   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:16.991077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:17.031002   73230 cri.go:89] found id: ""
	I0906 20:06:17.031034   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.031045   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:17.031052   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:17.031114   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:17.077533   73230 cri.go:89] found id: ""
	I0906 20:06:17.077560   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.077572   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:17.077579   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:17.077646   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:17.116770   73230 cri.go:89] found id: ""
	I0906 20:06:17.116798   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.116806   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:17.116817   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:17.116834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.169300   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:17.169337   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:17.184266   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:17.184299   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:17.266371   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:17.266400   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:17.266419   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:17.343669   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:17.343698   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:19.886541   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:19.899891   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:19.899951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:19.946592   73230 cri.go:89] found id: ""
	I0906 20:06:19.946621   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.946630   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:19.946636   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:19.946686   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:19.981758   73230 cri.go:89] found id: ""
	I0906 20:06:19.981788   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.981797   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:19.981802   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:19.981854   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:20.018372   73230 cri.go:89] found id: ""
	I0906 20:06:20.018397   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.018405   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:20.018411   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:20.018460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:20.054380   73230 cri.go:89] found id: ""
	I0906 20:06:20.054428   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.054440   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:20.054449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:20.054521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:20.092343   73230 cri.go:89] found id: ""
	I0906 20:06:20.092376   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.092387   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:20.092395   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:20.092463   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:20.128568   73230 cri.go:89] found id: ""
	I0906 20:06:20.128594   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.128604   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:20.128610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:20.128657   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:20.166018   73230 cri.go:89] found id: ""
	I0906 20:06:20.166046   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.166057   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:20.166072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:20.166125   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:20.203319   73230 cri.go:89] found id: ""
	I0906 20:06:20.203347   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.203355   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:20.203365   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:20.203381   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:20.287217   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:20.287243   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:20.287259   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:20.372799   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:20.372834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:20.416595   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:20.416620   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.338546   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.342409   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.838689   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.657411   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:22.157972   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.691327   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.692066   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:20.468340   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:20.468378   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:22.983259   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:22.997014   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:22.997098   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:23.034483   73230 cri.go:89] found id: ""
	I0906 20:06:23.034513   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.034524   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:23.034531   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:23.034597   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:23.072829   73230 cri.go:89] found id: ""
	I0906 20:06:23.072867   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.072878   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:23.072885   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:23.072949   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:23.110574   73230 cri.go:89] found id: ""
	I0906 20:06:23.110602   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.110613   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:23.110620   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:23.110684   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:23.149506   73230 cri.go:89] found id: ""
	I0906 20:06:23.149538   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.149550   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:23.149557   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:23.149619   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:23.191321   73230 cri.go:89] found id: ""
	I0906 20:06:23.191355   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.191367   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:23.191374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:23.191441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:23.233737   73230 cri.go:89] found id: ""
	I0906 20:06:23.233770   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.233791   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:23.233800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:23.233873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:23.270013   73230 cri.go:89] found id: ""
	I0906 20:06:23.270048   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.270060   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:23.270068   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:23.270127   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:23.309517   73230 cri.go:89] found id: ""
	I0906 20:06:23.309541   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.309549   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:23.309566   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:23.309578   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:23.380645   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:23.380675   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:23.380690   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:23.463656   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:23.463696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:23.504100   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:23.504134   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:23.557438   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:23.557483   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:23.841101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.340722   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.658261   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:27.155171   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.193829   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.690602   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.074045   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:26.088006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:26.088072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:26.124445   73230 cri.go:89] found id: ""
	I0906 20:06:26.124469   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.124476   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:26.124482   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:26.124537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:26.158931   73230 cri.go:89] found id: ""
	I0906 20:06:26.158957   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.158968   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:26.158975   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:26.159035   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:26.197125   73230 cri.go:89] found id: ""
	I0906 20:06:26.197154   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.197164   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:26.197171   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:26.197234   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:26.233241   73230 cri.go:89] found id: ""
	I0906 20:06:26.233278   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.233291   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:26.233300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:26.233366   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:26.269910   73230 cri.go:89] found id: ""
	I0906 20:06:26.269943   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.269955   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:26.269962   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:26.270026   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:26.308406   73230 cri.go:89] found id: ""
	I0906 20:06:26.308439   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.308450   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:26.308459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:26.308521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:26.344248   73230 cri.go:89] found id: ""
	I0906 20:06:26.344276   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.344288   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:26.344295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:26.344353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:26.391794   73230 cri.go:89] found id: ""
	I0906 20:06:26.391827   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.391840   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:26.391851   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:26.391866   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:26.444192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:26.444231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:26.459113   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:26.459144   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:26.533920   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:26.533945   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:26.533960   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:26.616382   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:26.616416   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:29.160429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:29.175007   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:29.175063   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:29.212929   73230 cri.go:89] found id: ""
	I0906 20:06:29.212961   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.212972   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:29.212980   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:29.213042   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:29.250777   73230 cri.go:89] found id: ""
	I0906 20:06:29.250806   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.250815   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:29.250821   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:29.250870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:29.292222   73230 cri.go:89] found id: ""
	I0906 20:06:29.292253   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.292262   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:29.292268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:29.292331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:29.328379   73230 cri.go:89] found id: ""
	I0906 20:06:29.328413   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.328431   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:29.328436   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:29.328482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:29.366792   73230 cri.go:89] found id: ""
	I0906 20:06:29.366822   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.366834   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:29.366841   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:29.366903   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:29.402233   73230 cri.go:89] found id: ""
	I0906 20:06:29.402261   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.402270   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:29.402276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:29.402331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:29.436695   73230 cri.go:89] found id: ""
	I0906 20:06:29.436724   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.436731   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:29.436736   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:29.436787   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:29.473050   73230 cri.go:89] found id: ""
	I0906 20:06:29.473074   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.473082   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:29.473091   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:29.473101   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:29.524981   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:29.525018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:29.538698   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:29.538722   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:29.611026   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:29.611049   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:29.611064   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:29.686898   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:29.686931   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:28.839118   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:30.839532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:29.156985   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.656552   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:28.694188   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.191032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.192623   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:32.228399   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:32.244709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:32.244775   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:32.285681   73230 cri.go:89] found id: ""
	I0906 20:06:32.285713   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.285724   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:32.285732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:32.285794   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:32.325312   73230 cri.go:89] found id: ""
	I0906 20:06:32.325340   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.325349   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:32.325355   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:32.325400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:32.361420   73230 cri.go:89] found id: ""
	I0906 20:06:32.361455   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.361468   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:32.361477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:32.361543   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:32.398881   73230 cri.go:89] found id: ""
	I0906 20:06:32.398956   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.398971   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:32.398984   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:32.399041   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:32.435336   73230 cri.go:89] found id: ""
	I0906 20:06:32.435362   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.435370   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:32.435375   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:32.435427   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:32.472849   73230 cri.go:89] found id: ""
	I0906 20:06:32.472900   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.472909   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:32.472914   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:32.472964   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:32.508176   73230 cri.go:89] found id: ""
	I0906 20:06:32.508199   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.508208   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:32.508213   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:32.508271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:32.550519   73230 cri.go:89] found id: ""
	I0906 20:06:32.550550   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.550561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:32.550576   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:32.550593   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:32.601362   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:32.601394   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:32.614821   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:32.614849   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:32.686044   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:32.686061   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:32.686074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:32.767706   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:32.767744   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:35.309159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:35.322386   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:35.322462   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:35.362909   73230 cri.go:89] found id: ""
	I0906 20:06:35.362937   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.362948   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:35.362955   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:35.363017   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:35.400591   73230 cri.go:89] found id: ""
	I0906 20:06:35.400621   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.400629   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:35.400635   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:35.400682   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:35.436547   73230 cri.go:89] found id: ""
	I0906 20:06:35.436578   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.436589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:35.436596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:35.436666   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:33.338812   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.340154   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.656782   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.657043   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.691312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:37.691358   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.473130   73230 cri.go:89] found id: ""
	I0906 20:06:35.473155   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.473163   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:35.473168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:35.473244   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:35.509646   73230 cri.go:89] found id: ""
	I0906 20:06:35.509677   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.509687   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:35.509695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:35.509754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:35.547651   73230 cri.go:89] found id: ""
	I0906 20:06:35.547684   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.547696   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:35.547703   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:35.547761   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:35.608590   73230 cri.go:89] found id: ""
	I0906 20:06:35.608614   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.608624   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:35.608631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:35.608691   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:35.651508   73230 cri.go:89] found id: ""
	I0906 20:06:35.651550   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.651561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:35.651572   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:35.651585   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:35.705502   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:35.705542   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:35.719550   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:35.719577   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:35.791435   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:35.791461   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:35.791476   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:35.869018   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:35.869070   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:38.411587   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:38.425739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:38.425800   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:38.463534   73230 cri.go:89] found id: ""
	I0906 20:06:38.463560   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.463571   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:38.463578   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:38.463628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:38.499238   73230 cri.go:89] found id: ""
	I0906 20:06:38.499269   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.499280   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:38.499287   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:38.499340   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:38.536297   73230 cri.go:89] found id: ""
	I0906 20:06:38.536334   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.536345   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:38.536352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:38.536417   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:38.573672   73230 cri.go:89] found id: ""
	I0906 20:06:38.573701   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.573712   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:38.573720   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:38.573779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:38.610913   73230 cri.go:89] found id: ""
	I0906 20:06:38.610937   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.610945   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:38.610950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:38.610996   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:38.647335   73230 cri.go:89] found id: ""
	I0906 20:06:38.647359   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.647368   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:38.647374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:38.647418   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:38.684054   73230 cri.go:89] found id: ""
	I0906 20:06:38.684084   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.684097   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:38.684106   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:38.684174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:38.731134   73230 cri.go:89] found id: ""
	I0906 20:06:38.731161   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.731173   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:38.731183   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:38.731199   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:38.787757   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:38.787798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:38.802920   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:38.802955   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:38.889219   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:38.889246   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:38.889261   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:38.964999   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:38.965042   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:37.838886   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.338914   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:38.156615   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.656577   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:39.691609   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.692330   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.504406   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:41.518111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:41.518169   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:41.558701   73230 cri.go:89] found id: ""
	I0906 20:06:41.558727   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.558738   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:41.558746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:41.558807   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:41.595986   73230 cri.go:89] found id: ""
	I0906 20:06:41.596009   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.596017   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:41.596023   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:41.596070   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:41.631462   73230 cri.go:89] found id: ""
	I0906 20:06:41.631486   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.631494   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:41.631504   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:41.631559   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:41.669646   73230 cri.go:89] found id: ""
	I0906 20:06:41.669674   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.669686   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:41.669693   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:41.669754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:41.708359   73230 cri.go:89] found id: ""
	I0906 20:06:41.708383   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.708391   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:41.708398   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:41.708446   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:41.745712   73230 cri.go:89] found id: ""
	I0906 20:06:41.745737   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.745750   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:41.745756   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:41.745804   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:41.781862   73230 cri.go:89] found id: ""
	I0906 20:06:41.781883   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.781892   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:41.781898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:41.781946   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:41.816687   73230 cri.go:89] found id: ""
	I0906 20:06:41.816714   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.816722   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:41.816730   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:41.816742   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:41.830115   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:41.830145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:41.908303   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:41.908334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:41.908348   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:42.001459   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:42.001501   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:42.061341   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:42.061368   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:44.619574   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:44.633355   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:44.633423   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:44.668802   73230 cri.go:89] found id: ""
	I0906 20:06:44.668834   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.668845   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:44.668852   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:44.668924   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:44.707613   73230 cri.go:89] found id: ""
	I0906 20:06:44.707639   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.707650   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:44.707657   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:44.707727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:44.744202   73230 cri.go:89] found id: ""
	I0906 20:06:44.744231   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.744243   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:44.744250   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:44.744311   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:44.783850   73230 cri.go:89] found id: ""
	I0906 20:06:44.783873   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.783881   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:44.783886   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:44.783938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:44.824986   73230 cri.go:89] found id: ""
	I0906 20:06:44.825011   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.825019   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:44.825025   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:44.825073   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:44.865157   73230 cri.go:89] found id: ""
	I0906 20:06:44.865182   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.865190   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:44.865196   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:44.865258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:44.908268   73230 cri.go:89] found id: ""
	I0906 20:06:44.908295   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.908305   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:44.908312   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:44.908359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:44.948669   73230 cri.go:89] found id: ""
	I0906 20:06:44.948697   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.948706   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:44.948716   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:44.948731   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:44.961862   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:44.961887   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:45.036756   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:45.036783   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:45.036801   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:45.116679   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:45.116717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:45.159756   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:45.159784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:42.339271   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.839443   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:43.155878   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:45.158884   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.192211   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:46.692140   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.714682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:47.730754   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:47.730820   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:47.783208   73230 cri.go:89] found id: ""
	I0906 20:06:47.783239   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.783249   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:47.783255   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:47.783312   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:47.844291   73230 cri.go:89] found id: ""
	I0906 20:06:47.844324   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.844336   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:47.844344   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:47.844407   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:47.881877   73230 cri.go:89] found id: ""
	I0906 20:06:47.881905   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.881913   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:47.881919   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:47.881986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:47.918034   73230 cri.go:89] found id: ""
	I0906 20:06:47.918058   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.918066   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:47.918072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:47.918126   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:47.957045   73230 cri.go:89] found id: ""
	I0906 20:06:47.957068   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.957077   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:47.957083   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:47.957134   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:47.993849   73230 cri.go:89] found id: ""
	I0906 20:06:47.993872   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.993883   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:47.993890   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:47.993951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:48.031214   73230 cri.go:89] found id: ""
	I0906 20:06:48.031239   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.031249   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:48.031257   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:48.031314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:48.064634   73230 cri.go:89] found id: ""
	I0906 20:06:48.064673   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.064690   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:48.064698   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:48.064710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:48.104307   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:48.104343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:48.158869   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:48.158900   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:48.173000   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:48.173026   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:48.248751   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:48.248774   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:48.248792   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:47.339014   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.339656   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.838817   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.656402   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.156349   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:52.156651   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.192411   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.691635   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.833490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:50.847618   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:50.847702   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:50.887141   73230 cri.go:89] found id: ""
	I0906 20:06:50.887167   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.887176   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:50.887181   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:50.887228   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:50.923435   73230 cri.go:89] found id: ""
	I0906 20:06:50.923480   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.923491   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:50.923499   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:50.923567   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:50.959704   73230 cri.go:89] found id: ""
	I0906 20:06:50.959730   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.959742   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:50.959748   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:50.959810   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:50.992994   73230 cri.go:89] found id: ""
	I0906 20:06:50.993023   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.993032   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:50.993037   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:50.993091   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:51.031297   73230 cri.go:89] found id: ""
	I0906 20:06:51.031321   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.031329   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:51.031335   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:51.031390   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:51.067698   73230 cri.go:89] found id: ""
	I0906 20:06:51.067721   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.067732   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:51.067739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:51.067799   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:51.102240   73230 cri.go:89] found id: ""
	I0906 20:06:51.102268   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.102278   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:51.102285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:51.102346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:51.137146   73230 cri.go:89] found id: ""
	I0906 20:06:51.137172   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.137183   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:51.137194   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:51.137209   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:51.216158   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:51.216194   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:51.256063   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:51.256088   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:51.309176   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:51.309210   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:51.323515   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:51.323544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:51.393281   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:53.893714   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:53.907807   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:53.907863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:53.947929   73230 cri.go:89] found id: ""
	I0906 20:06:53.947954   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.947962   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:53.947968   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:53.948014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:53.983005   73230 cri.go:89] found id: ""
	I0906 20:06:53.983028   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.983041   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:53.983046   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:53.983094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:54.019004   73230 cri.go:89] found id: ""
	I0906 20:06:54.019027   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.019035   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:54.019041   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:54.019094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:54.060240   73230 cri.go:89] found id: ""
	I0906 20:06:54.060266   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.060279   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:54.060285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:54.060336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:54.096432   73230 cri.go:89] found id: ""
	I0906 20:06:54.096461   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.096469   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:54.096475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:54.096537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:54.132992   73230 cri.go:89] found id: ""
	I0906 20:06:54.133021   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.133033   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:54.133040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:54.133103   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:54.172730   73230 cri.go:89] found id: ""
	I0906 20:06:54.172754   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.172766   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:54.172778   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:54.172839   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:54.212050   73230 cri.go:89] found id: ""
	I0906 20:06:54.212191   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.212202   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:54.212212   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:54.212234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:54.263603   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:54.263647   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:54.281291   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:54.281324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:54.359523   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:54.359545   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:54.359568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:54.442230   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:54.442265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:54.339159   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.841459   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.157379   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.656134   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.191878   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.691766   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.983744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:56.997451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:56.997527   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:57.034792   73230 cri.go:89] found id: ""
	I0906 20:06:57.034817   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.034825   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:57.034831   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:57.034883   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:57.073709   73230 cri.go:89] found id: ""
	I0906 20:06:57.073735   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.073745   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:57.073751   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:57.073803   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:57.122758   73230 cri.go:89] found id: ""
	I0906 20:06:57.122787   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.122798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:57.122808   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:57.122865   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:57.158208   73230 cri.go:89] found id: ""
	I0906 20:06:57.158242   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.158252   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:57.158262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:57.158323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:57.194004   73230 cri.go:89] found id: ""
	I0906 20:06:57.194029   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.194037   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:57.194044   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:57.194099   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:57.230068   73230 cri.go:89] found id: ""
	I0906 20:06:57.230099   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.230111   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:57.230119   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:57.230186   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:57.265679   73230 cri.go:89] found id: ""
	I0906 20:06:57.265707   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.265718   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:57.265735   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:57.265801   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:57.304917   73230 cri.go:89] found id: ""
	I0906 20:06:57.304946   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.304956   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:57.304967   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:57.304980   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:57.357238   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:57.357276   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:57.371648   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:57.371674   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:57.438572   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:57.438590   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:57.438602   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:57.528212   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:57.528256   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:00.071140   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:00.084975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:00.085055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:00.119680   73230 cri.go:89] found id: ""
	I0906 20:07:00.119713   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.119725   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:00.119732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:00.119786   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:00.155678   73230 cri.go:89] found id: ""
	I0906 20:07:00.155704   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.155716   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:00.155723   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:00.155769   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:00.190758   73230 cri.go:89] found id: ""
	I0906 20:07:00.190783   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.190793   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:00.190799   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:00.190863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:00.228968   73230 cri.go:89] found id: ""
	I0906 20:07:00.228999   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.229010   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:00.229018   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:00.229079   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:00.265691   73230 cri.go:89] found id: ""
	I0906 20:07:00.265722   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.265733   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:00.265741   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:00.265806   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:00.305785   73230 cri.go:89] found id: ""
	I0906 20:07:00.305812   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.305820   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:00.305825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:00.305872   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:00.341872   73230 cri.go:89] found id: ""
	I0906 20:07:00.341895   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.341902   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:00.341907   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:00.341955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:00.377661   73230 cri.go:89] found id: ""
	I0906 20:07:00.377690   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.377702   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:00.377712   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:00.377725   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:00.428215   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:00.428254   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:00.443135   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:00.443165   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 20:06:59.337996   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.338924   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:58.657236   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.156973   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:59.191556   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.192082   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.193511   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	W0906 20:07:00.518745   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:00.518768   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:00.518781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:00.604413   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:00.604448   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.146657   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:03.160610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:03.160665   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:03.200916   73230 cri.go:89] found id: ""
	I0906 20:07:03.200950   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.200960   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:03.200967   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:03.201029   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:03.239550   73230 cri.go:89] found id: ""
	I0906 20:07:03.239579   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.239592   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:03.239600   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:03.239660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:03.278216   73230 cri.go:89] found id: ""
	I0906 20:07:03.278244   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.278255   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:03.278263   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:03.278325   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:03.315028   73230 cri.go:89] found id: ""
	I0906 20:07:03.315059   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.315073   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:03.315080   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:03.315146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:03.354614   73230 cri.go:89] found id: ""
	I0906 20:07:03.354638   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.354647   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:03.354652   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:03.354710   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:03.390105   73230 cri.go:89] found id: ""
	I0906 20:07:03.390129   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.390138   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:03.390144   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:03.390190   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:03.427651   73230 cri.go:89] found id: ""
	I0906 20:07:03.427679   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.427687   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:03.427695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:03.427763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:03.463191   73230 cri.go:89] found id: ""
	I0906 20:07:03.463220   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.463230   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:03.463242   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:03.463288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:03.476966   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:03.476995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:03.558415   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:03.558441   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:03.558457   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:03.641528   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:03.641564   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.680916   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:03.680943   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:03.339511   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.340113   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.157907   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.160507   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.692151   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:08.191782   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:06.235947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:06.249589   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:06.249667   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:06.289193   73230 cri.go:89] found id: ""
	I0906 20:07:06.289223   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.289235   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:06.289242   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:06.289305   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:06.324847   73230 cri.go:89] found id: ""
	I0906 20:07:06.324887   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.324898   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:06.324904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:06.324966   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:06.361755   73230 cri.go:89] found id: ""
	I0906 20:07:06.361786   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.361798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:06.361806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:06.361873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:06.397739   73230 cri.go:89] found id: ""
	I0906 20:07:06.397766   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.397775   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:06.397780   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:06.397833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:06.432614   73230 cri.go:89] found id: ""
	I0906 20:07:06.432641   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.432649   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:06.432655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:06.432703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:06.467784   73230 cri.go:89] found id: ""
	I0906 20:07:06.467812   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.467823   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:06.467830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:06.467890   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:06.507055   73230 cri.go:89] found id: ""
	I0906 20:07:06.507085   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.507096   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:06.507104   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:06.507165   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:06.544688   73230 cri.go:89] found id: ""
	I0906 20:07:06.544720   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.544730   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:06.544740   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:06.544751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:06.597281   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:06.597314   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:06.612749   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:06.612774   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:06.684973   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:06.684993   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:06.685006   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:06.764306   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:06.764345   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.304340   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:09.317460   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:09.317536   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:09.354289   73230 cri.go:89] found id: ""
	I0906 20:07:09.354312   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.354322   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:09.354327   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:09.354373   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:09.390962   73230 cri.go:89] found id: ""
	I0906 20:07:09.390997   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.391008   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:09.391015   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:09.391076   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:09.427456   73230 cri.go:89] found id: ""
	I0906 20:07:09.427491   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.427502   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:09.427510   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:09.427572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:09.462635   73230 cri.go:89] found id: ""
	I0906 20:07:09.462667   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.462680   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:09.462687   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:09.462749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:09.506726   73230 cri.go:89] found id: ""
	I0906 20:07:09.506751   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.506767   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:09.506775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:09.506836   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:09.541974   73230 cri.go:89] found id: ""
	I0906 20:07:09.541999   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.542009   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:09.542017   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:09.542077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:09.580069   73230 cri.go:89] found id: ""
	I0906 20:07:09.580104   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.580115   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:09.580123   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:09.580182   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:09.616025   73230 cri.go:89] found id: ""
	I0906 20:07:09.616054   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.616065   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:09.616075   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:09.616090   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:09.630967   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:09.630993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:09.716733   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:09.716766   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:09.716782   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:09.792471   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:09.792503   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.832326   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:09.832357   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:07.840909   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.339239   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:07.655710   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:09.656069   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:11.656458   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.192155   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.192716   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.385565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:12.398694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:12.398768   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:12.437446   73230 cri.go:89] found id: ""
	I0906 20:07:12.437473   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.437482   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:12.437487   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:12.437555   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:12.473328   73230 cri.go:89] found id: ""
	I0906 20:07:12.473355   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.473362   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:12.473372   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:12.473429   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:12.510935   73230 cri.go:89] found id: ""
	I0906 20:07:12.510962   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.510972   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:12.510979   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:12.511044   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:12.547961   73230 cri.go:89] found id: ""
	I0906 20:07:12.547991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.547999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:12.548005   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:12.548062   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:12.585257   73230 cri.go:89] found id: ""
	I0906 20:07:12.585291   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.585302   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:12.585309   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:12.585369   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:12.623959   73230 cri.go:89] found id: ""
	I0906 20:07:12.623991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.624003   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:12.624010   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:12.624066   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:12.662795   73230 cri.go:89] found id: ""
	I0906 20:07:12.662822   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.662832   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:12.662840   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:12.662896   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:12.700941   73230 cri.go:89] found id: ""
	I0906 20:07:12.700967   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.700974   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:12.700983   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:12.700994   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:12.785989   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:12.786025   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:12.826678   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:12.826704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:12.881558   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:12.881599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:12.896035   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:12.896065   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:12.970721   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:12.839031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.339615   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:13.656809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.657470   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:14.691032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:16.692697   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.471171   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:15.484466   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:15.484541   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:15.518848   73230 cri.go:89] found id: ""
	I0906 20:07:15.518875   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.518886   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:15.518894   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:15.518953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:15.553444   73230 cri.go:89] found id: ""
	I0906 20:07:15.553468   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.553476   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:15.553482   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:15.553528   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:15.589136   73230 cri.go:89] found id: ""
	I0906 20:07:15.589160   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.589168   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:15.589173   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:15.589220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:15.624410   73230 cri.go:89] found id: ""
	I0906 20:07:15.624434   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.624443   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:15.624449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:15.624492   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:15.661506   73230 cri.go:89] found id: ""
	I0906 20:07:15.661535   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.661547   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:15.661555   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:15.661615   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:15.699126   73230 cri.go:89] found id: ""
	I0906 20:07:15.699148   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.699155   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:15.699161   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:15.699207   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:15.736489   73230 cri.go:89] found id: ""
	I0906 20:07:15.736523   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.736534   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:15.736542   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:15.736604   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:15.771988   73230 cri.go:89] found id: ""
	I0906 20:07:15.772013   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.772020   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:15.772029   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:15.772045   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:15.822734   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:15.822765   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:15.836820   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:15.836872   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:15.915073   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:15.915111   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:15.915126   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:15.988476   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:15.988514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:18.528710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:18.541450   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:18.541526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:18.581278   73230 cri.go:89] found id: ""
	I0906 20:07:18.581308   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.581317   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:18.581323   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:18.581381   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:18.616819   73230 cri.go:89] found id: ""
	I0906 20:07:18.616843   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.616850   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:18.616871   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:18.616923   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:18.655802   73230 cri.go:89] found id: ""
	I0906 20:07:18.655827   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.655842   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:18.655849   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:18.655908   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:18.693655   73230 cri.go:89] found id: ""
	I0906 20:07:18.693679   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.693689   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:18.693696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:18.693779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:18.730882   73230 cri.go:89] found id: ""
	I0906 20:07:18.730914   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.730924   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:18.730931   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:18.730994   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:18.767219   73230 cri.go:89] found id: ""
	I0906 20:07:18.767243   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.767250   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:18.767256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:18.767316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:18.802207   73230 cri.go:89] found id: ""
	I0906 20:07:18.802230   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.802238   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:18.802243   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:18.802300   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:18.840449   73230 cri.go:89] found id: ""
	I0906 20:07:18.840471   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.840481   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:18.840491   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:18.840504   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:18.892430   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:18.892469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:18.906527   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:18.906561   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:18.980462   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:18.980483   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:18.980494   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:19.059550   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:19.059588   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:17.340292   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:19.840090   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.156486   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:20.657764   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.693021   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.191529   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.191865   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.599879   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:21.614131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:21.614205   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:21.650887   73230 cri.go:89] found id: ""
	I0906 20:07:21.650910   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.650919   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:21.650924   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:21.650978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:21.684781   73230 cri.go:89] found id: ""
	I0906 20:07:21.684809   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.684819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:21.684827   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:21.684907   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:21.722685   73230 cri.go:89] found id: ""
	I0906 20:07:21.722711   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.722722   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:21.722729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:21.722791   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:21.757581   73230 cri.go:89] found id: ""
	I0906 20:07:21.757607   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.757616   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:21.757622   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:21.757670   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:21.791984   73230 cri.go:89] found id: ""
	I0906 20:07:21.792008   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.792016   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:21.792022   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:21.792072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:21.853612   73230 cri.go:89] found id: ""
	I0906 20:07:21.853636   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.853644   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:21.853650   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:21.853699   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:21.894184   73230 cri.go:89] found id: ""
	I0906 20:07:21.894232   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.894247   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:21.894256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:21.894318   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:21.930731   73230 cri.go:89] found id: ""
	I0906 20:07:21.930758   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.930768   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:21.930779   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:21.930798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:21.969174   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:21.969207   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:22.017647   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:22.017680   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:22.033810   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:22.033852   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:22.111503   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:22.111530   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:22.111544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:24.696348   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:24.710428   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:24.710506   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:24.747923   73230 cri.go:89] found id: ""
	I0906 20:07:24.747958   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.747969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:24.747977   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:24.748037   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:24.782216   73230 cri.go:89] found id: ""
	I0906 20:07:24.782250   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.782260   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:24.782268   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:24.782329   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:24.822093   73230 cri.go:89] found id: ""
	I0906 20:07:24.822126   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.822137   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:24.822148   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:24.822217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:24.857166   73230 cri.go:89] found id: ""
	I0906 20:07:24.857202   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.857213   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:24.857224   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:24.857314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:24.892575   73230 cri.go:89] found id: ""
	I0906 20:07:24.892610   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.892621   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:24.892629   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:24.892689   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:24.929102   73230 cri.go:89] found id: ""
	I0906 20:07:24.929130   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.929140   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:24.929149   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:24.929206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:24.964224   73230 cri.go:89] found id: ""
	I0906 20:07:24.964257   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.964268   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:24.964276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:24.964337   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:25.000453   73230 cri.go:89] found id: ""
	I0906 20:07:25.000475   73230 logs.go:276] 0 containers: []
	W0906 20:07:25.000485   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:25.000496   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:25.000511   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:25.041824   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:25.041851   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:25.093657   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:25.093692   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:25.107547   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:25.107576   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:25.178732   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:25.178755   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:25.178771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:22.338864   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:24.339432   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:26.838165   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.156449   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.156979   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.158086   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.192653   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.693480   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.764271   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:27.777315   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:27.777389   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:27.812621   73230 cri.go:89] found id: ""
	I0906 20:07:27.812644   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.812655   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:27.812663   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:27.812718   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:27.853063   73230 cri.go:89] found id: ""
	I0906 20:07:27.853093   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.853104   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:27.853112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:27.853171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:27.894090   73230 cri.go:89] found id: ""
	I0906 20:07:27.894118   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.894130   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:27.894137   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:27.894196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:27.930764   73230 cri.go:89] found id: ""
	I0906 20:07:27.930791   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.930802   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:27.930809   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:27.930870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:27.967011   73230 cri.go:89] found id: ""
	I0906 20:07:27.967036   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.967047   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:27.967053   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:27.967111   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:28.002119   73230 cri.go:89] found id: ""
	I0906 20:07:28.002146   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.002157   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:28.002164   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:28.002226   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:28.043884   73230 cri.go:89] found id: ""
	I0906 20:07:28.043909   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.043917   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:28.043923   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:28.043979   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:28.081510   73230 cri.go:89] found id: ""
	I0906 20:07:28.081538   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.081547   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:28.081557   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:28.081568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:28.159077   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:28.159109   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:28.207489   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:28.207527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:28.267579   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:28.267613   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:28.287496   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:28.287529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:28.376555   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:28.838301   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.843091   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:29.655598   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:31.657757   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.192112   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:32.692354   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.876683   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:30.890344   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:30.890424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:30.930618   73230 cri.go:89] found id: ""
	I0906 20:07:30.930647   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.930658   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:30.930666   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:30.930727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:30.968801   73230 cri.go:89] found id: ""
	I0906 20:07:30.968825   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.968834   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:30.968839   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:30.968911   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:31.006437   73230 cri.go:89] found id: ""
	I0906 20:07:31.006463   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.006472   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:31.006477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:31.006531   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:31.042091   73230 cri.go:89] found id: ""
	I0906 20:07:31.042117   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.042125   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:31.042131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:31.042177   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:31.079244   73230 cri.go:89] found id: ""
	I0906 20:07:31.079271   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.079280   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:31.079286   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:31.079336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:31.116150   73230 cri.go:89] found id: ""
	I0906 20:07:31.116174   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.116182   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:31.116188   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:31.116240   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:31.151853   73230 cri.go:89] found id: ""
	I0906 20:07:31.151877   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.151886   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:31.151892   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:31.151939   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:31.189151   73230 cri.go:89] found id: ""
	I0906 20:07:31.189181   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.189192   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:31.189203   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:31.189218   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:31.234466   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:31.234493   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:31.286254   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:31.286288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:31.300500   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:31.300525   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:31.372968   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:31.372987   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:31.372997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:33.949865   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:33.964791   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:33.964849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:34.027049   73230 cri.go:89] found id: ""
	I0906 20:07:34.027082   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.027094   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:34.027102   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:34.027162   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:34.080188   73230 cri.go:89] found id: ""
	I0906 20:07:34.080218   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.080230   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:34.080237   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:34.080320   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:34.124146   73230 cri.go:89] found id: ""
	I0906 20:07:34.124171   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.124179   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:34.124185   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:34.124230   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:34.161842   73230 cri.go:89] found id: ""
	I0906 20:07:34.161864   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.161872   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:34.161878   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:34.161938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:34.201923   73230 cri.go:89] found id: ""
	I0906 20:07:34.201951   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.201961   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:34.201967   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:34.202032   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:34.246609   73230 cri.go:89] found id: ""
	I0906 20:07:34.246644   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.246656   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:34.246665   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:34.246739   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:34.287616   73230 cri.go:89] found id: ""
	I0906 20:07:34.287646   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.287657   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:34.287663   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:34.287721   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:34.322270   73230 cri.go:89] found id: ""
	I0906 20:07:34.322297   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.322309   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:34.322320   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:34.322334   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:34.378598   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:34.378633   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:34.392748   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:34.392781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:34.468620   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:34.468648   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:34.468663   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:34.548290   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:34.548324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:33.339665   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.339890   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:34.157895   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:36.656829   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.192386   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.192574   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.095962   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:37.110374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:37.110459   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:37.146705   73230 cri.go:89] found id: ""
	I0906 20:07:37.146732   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.146740   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:37.146746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:37.146802   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:37.185421   73230 cri.go:89] found id: ""
	I0906 20:07:37.185449   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.185461   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:37.185468   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:37.185532   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:37.224767   73230 cri.go:89] found id: ""
	I0906 20:07:37.224793   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.224801   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:37.224806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:37.224884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:37.265392   73230 cri.go:89] found id: ""
	I0906 20:07:37.265422   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.265432   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:37.265438   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:37.265496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:37.302065   73230 cri.go:89] found id: ""
	I0906 20:07:37.302093   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.302101   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:37.302107   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:37.302171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:37.341466   73230 cri.go:89] found id: ""
	I0906 20:07:37.341493   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.341505   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:37.341513   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:37.341576   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.377701   73230 cri.go:89] found id: ""
	I0906 20:07:37.377724   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.377732   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:37.377738   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:37.377798   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:37.412927   73230 cri.go:89] found id: ""
	I0906 20:07:37.412955   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.412966   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:37.412977   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:37.412993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:37.427750   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:37.427776   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:37.500904   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:37.500928   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:37.500945   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:37.583204   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:37.583246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:37.623477   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:37.623512   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.179798   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:40.194295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:40.194372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:40.229731   73230 cri.go:89] found id: ""
	I0906 20:07:40.229768   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.229779   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:40.229787   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:40.229848   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:40.275909   73230 cri.go:89] found id: ""
	I0906 20:07:40.275943   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.275956   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:40.275964   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:40.276049   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:40.316552   73230 cri.go:89] found id: ""
	I0906 20:07:40.316585   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.316594   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:40.316599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:40.316647   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:40.355986   73230 cri.go:89] found id: ""
	I0906 20:07:40.356017   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.356028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:40.356036   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:40.356095   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:40.396486   73230 cri.go:89] found id: ""
	I0906 20:07:40.396522   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.396535   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:40.396544   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:40.396609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:40.440311   73230 cri.go:89] found id: ""
	I0906 20:07:40.440338   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.440346   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:40.440352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:40.440414   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.346532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.839521   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.156737   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.156967   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.691703   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.691972   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:40.476753   73230 cri.go:89] found id: ""
	I0906 20:07:40.476781   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.476790   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:40.476797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:40.476844   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:40.514462   73230 cri.go:89] found id: ""
	I0906 20:07:40.514489   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.514500   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:40.514511   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:40.514527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:40.553670   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:40.553700   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.608304   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:40.608343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:40.622486   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:40.622514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:40.699408   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:40.699434   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:40.699451   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.278892   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:43.292455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:43.292526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:43.328900   73230 cri.go:89] found id: ""
	I0906 20:07:43.328929   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.328940   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:43.328948   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:43.329009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:43.366728   73230 cri.go:89] found id: ""
	I0906 20:07:43.366754   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.366762   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:43.366768   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:43.366817   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:43.401566   73230 cri.go:89] found id: ""
	I0906 20:07:43.401590   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.401599   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:43.401604   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:43.401650   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:43.437022   73230 cri.go:89] found id: ""
	I0906 20:07:43.437051   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.437063   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:43.437072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:43.437140   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:43.473313   73230 cri.go:89] found id: ""
	I0906 20:07:43.473342   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.473354   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:43.473360   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:43.473420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:43.513590   73230 cri.go:89] found id: ""
	I0906 20:07:43.513616   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.513624   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:43.513630   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:43.513690   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:43.549974   73230 cri.go:89] found id: ""
	I0906 20:07:43.550011   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.550025   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:43.550032   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:43.550100   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:43.592386   73230 cri.go:89] found id: ""
	I0906 20:07:43.592426   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.592444   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:43.592454   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:43.592482   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:43.607804   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:43.607841   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:43.679533   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:43.679568   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:43.679580   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.762111   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:43.762145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:43.802883   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:43.802908   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:42.340252   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:44.838648   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:46.838831   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.157956   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.657410   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.693014   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.693640   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.191509   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:46.358429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:46.371252   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:46.371326   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:46.406397   73230 cri.go:89] found id: ""
	I0906 20:07:46.406420   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.406430   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:46.406437   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:46.406496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:46.452186   73230 cri.go:89] found id: ""
	I0906 20:07:46.452209   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.452218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:46.452223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:46.452288   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:46.489418   73230 cri.go:89] found id: ""
	I0906 20:07:46.489443   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.489454   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:46.489461   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:46.489523   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:46.529650   73230 cri.go:89] found id: ""
	I0906 20:07:46.529679   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.529690   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:46.529698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:46.529760   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:46.566429   73230 cri.go:89] found id: ""
	I0906 20:07:46.566454   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.566466   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:46.566474   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:46.566539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:46.604999   73230 cri.go:89] found id: ""
	I0906 20:07:46.605026   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.605034   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:46.605040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:46.605085   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:46.643116   73230 cri.go:89] found id: ""
	I0906 20:07:46.643144   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.643155   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:46.643162   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:46.643222   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:46.679734   73230 cri.go:89] found id: ""
	I0906 20:07:46.679756   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.679764   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:46.679772   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:46.679784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:46.736380   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:46.736430   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:46.750649   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:46.750681   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:46.833098   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:46.833130   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:46.833146   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:46.912223   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:46.912267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.453662   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:49.466520   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:49.466585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:49.508009   73230 cri.go:89] found id: ""
	I0906 20:07:49.508038   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.508049   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:49.508056   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:49.508119   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:49.545875   73230 cri.go:89] found id: ""
	I0906 20:07:49.545900   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.545911   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:49.545918   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:49.545978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:49.584899   73230 cri.go:89] found id: ""
	I0906 20:07:49.584926   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.584933   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:49.584940   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:49.585001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:49.621044   73230 cri.go:89] found id: ""
	I0906 20:07:49.621073   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.621085   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:49.621092   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:49.621146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:49.657074   73230 cri.go:89] found id: ""
	I0906 20:07:49.657099   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.657108   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:49.657115   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:49.657174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:49.693734   73230 cri.go:89] found id: ""
	I0906 20:07:49.693759   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.693767   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:49.693773   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:49.693827   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:49.729920   73230 cri.go:89] found id: ""
	I0906 20:07:49.729950   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.729960   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:49.729965   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:49.730014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:49.765282   73230 cri.go:89] found id: ""
	I0906 20:07:49.765313   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.765324   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:49.765335   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:49.765350   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:49.842509   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:49.842531   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:49.842543   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:49.920670   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:49.920704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.961193   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:49.961220   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:50.014331   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:50.014366   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:48.839877   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:51.339381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.156290   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.157337   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.692055   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:53.191487   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.529758   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:52.543533   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:52.543596   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:52.582802   73230 cri.go:89] found id: ""
	I0906 20:07:52.582826   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.582838   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:52.582845   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:52.582909   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:52.625254   73230 cri.go:89] found id: ""
	I0906 20:07:52.625287   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.625308   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:52.625317   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:52.625383   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:52.660598   73230 cri.go:89] found id: ""
	I0906 20:07:52.660621   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.660632   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:52.660640   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:52.660703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:52.702980   73230 cri.go:89] found id: ""
	I0906 20:07:52.703004   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.703014   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:52.703021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:52.703082   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:52.740361   73230 cri.go:89] found id: ""
	I0906 20:07:52.740387   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.740394   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:52.740400   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:52.740447   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:52.780011   73230 cri.go:89] found id: ""
	I0906 20:07:52.780043   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.780056   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:52.780063   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:52.780123   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:52.825546   73230 cri.go:89] found id: ""
	I0906 20:07:52.825583   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.825595   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:52.825602   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:52.825659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:52.864347   73230 cri.go:89] found id: ""
	I0906 20:07:52.864381   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.864393   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:52.864403   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:52.864417   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:52.943041   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:52.943077   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:52.986158   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:52.986185   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:53.039596   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:53.039635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:53.054265   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:53.054295   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:53.125160   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:53.339887   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.839233   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.657521   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.157101   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.192803   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.692328   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.626058   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:55.639631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:55.639705   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:55.677283   73230 cri.go:89] found id: ""
	I0906 20:07:55.677304   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.677312   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:55.677317   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:55.677372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:55.714371   73230 cri.go:89] found id: ""
	I0906 20:07:55.714402   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.714414   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:55.714422   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:55.714509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:55.753449   73230 cri.go:89] found id: ""
	I0906 20:07:55.753487   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.753500   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:55.753507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:55.753575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:55.792955   73230 cri.go:89] found id: ""
	I0906 20:07:55.792987   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.792999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:55.793006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:55.793074   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:55.827960   73230 cri.go:89] found id: ""
	I0906 20:07:55.827985   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.827996   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:55.828003   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:55.828052   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:55.867742   73230 cri.go:89] found id: ""
	I0906 20:07:55.867765   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.867778   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:55.867785   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:55.867849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:55.907328   73230 cri.go:89] found id: ""
	I0906 20:07:55.907352   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.907359   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:55.907365   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:55.907424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:55.946057   73230 cri.go:89] found id: ""
	I0906 20:07:55.946091   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.946099   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:55.946108   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:55.946119   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:56.033579   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:56.033598   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:56.033611   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:56.116337   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:56.116372   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:56.163397   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:56.163428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:56.217189   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:56.217225   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:58.736147   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:58.749729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:58.749833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:58.786375   73230 cri.go:89] found id: ""
	I0906 20:07:58.786399   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.786406   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:58.786412   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:58.786460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:58.825188   73230 cri.go:89] found id: ""
	I0906 20:07:58.825210   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.825218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:58.825223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:58.825271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:58.866734   73230 cri.go:89] found id: ""
	I0906 20:07:58.866756   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.866764   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:58.866769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:58.866823   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:58.909742   73230 cri.go:89] found id: ""
	I0906 20:07:58.909774   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.909785   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:58.909793   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:58.909850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:58.950410   73230 cri.go:89] found id: ""
	I0906 20:07:58.950438   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.950447   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:58.950452   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:58.950500   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:58.987431   73230 cri.go:89] found id: ""
	I0906 20:07:58.987454   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.987462   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:58.987468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:58.987518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:59.023432   73230 cri.go:89] found id: ""
	I0906 20:07:59.023462   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.023474   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:59.023482   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:59.023544   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:59.057695   73230 cri.go:89] found id: ""
	I0906 20:07:59.057724   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.057734   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:59.057743   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:59.057755   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:59.109634   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:59.109671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:59.125436   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:59.125479   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:59.202018   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:59.202040   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:59.202054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:59.281418   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:59.281456   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:58.339751   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.842794   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.658145   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.155679   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.157913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.192179   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.193068   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:01.823947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:01.839055   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:01.839115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:01.876178   73230 cri.go:89] found id: ""
	I0906 20:08:01.876206   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.876215   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:01.876220   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:01.876274   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:01.912000   73230 cri.go:89] found id: ""
	I0906 20:08:01.912028   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.912038   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:01.912045   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:01.912107   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:01.948382   73230 cri.go:89] found id: ""
	I0906 20:08:01.948412   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.948420   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:01.948426   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:01.948474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:01.982991   73230 cri.go:89] found id: ""
	I0906 20:08:01.983019   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.983028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:01.983033   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:01.983080   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:02.016050   73230 cri.go:89] found id: ""
	I0906 20:08:02.016076   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.016085   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:02.016091   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:02.016151   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:02.051087   73230 cri.go:89] found id: ""
	I0906 20:08:02.051125   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.051137   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:02.051150   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:02.051214   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:02.093230   73230 cri.go:89] found id: ""
	I0906 20:08:02.093254   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.093263   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:02.093268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:02.093323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:02.130580   73230 cri.go:89] found id: ""
	I0906 20:08:02.130609   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.130619   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:02.130629   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:02.130644   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:02.183192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:02.183231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:02.199079   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:02.199110   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:02.274259   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:02.274279   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:02.274303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:02.356198   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:02.356234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:04.899180   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:04.912879   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:04.912955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:04.950598   73230 cri.go:89] found id: ""
	I0906 20:08:04.950632   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.950642   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:04.950656   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:04.950713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:04.986474   73230 cri.go:89] found id: ""
	I0906 20:08:04.986504   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.986513   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:04.986519   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:04.986570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:05.025837   73230 cri.go:89] found id: ""
	I0906 20:08:05.025868   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.025877   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:05.025884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:05.025934   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:05.063574   73230 cri.go:89] found id: ""
	I0906 20:08:05.063613   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.063622   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:05.063628   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:05.063674   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:05.101341   73230 cri.go:89] found id: ""
	I0906 20:08:05.101371   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.101383   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:05.101390   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:05.101461   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:05.148551   73230 cri.go:89] found id: ""
	I0906 20:08:05.148580   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.148591   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:05.148599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:05.148668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:05.186907   73230 cri.go:89] found id: ""
	I0906 20:08:05.186935   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.186945   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:05.186953   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:05.187019   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:05.226237   73230 cri.go:89] found id: ""
	I0906 20:08:05.226265   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.226275   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:05.226287   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:05.226300   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:05.242892   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:05.242925   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:05.317797   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:05.317824   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:05.317839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:05.400464   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:05.400500   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:05.442632   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:05.442657   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:03.340541   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:05.840156   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.655913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:06.657424   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.691255   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.191739   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.998033   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:08.012363   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:08.012441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:08.048816   73230 cri.go:89] found id: ""
	I0906 20:08:08.048847   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.048876   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:08.048884   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:08.048947   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:08.109623   73230 cri.go:89] found id: ""
	I0906 20:08:08.109650   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.109661   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:08.109668   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:08.109730   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:08.145405   73230 cri.go:89] found id: ""
	I0906 20:08:08.145432   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.145443   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:08.145451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:08.145514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:08.187308   73230 cri.go:89] found id: ""
	I0906 20:08:08.187344   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.187355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:08.187362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:08.187422   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:08.228782   73230 cri.go:89] found id: ""
	I0906 20:08:08.228815   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.228826   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:08.228833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:08.228918   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:08.269237   73230 cri.go:89] found id: ""
	I0906 20:08:08.269266   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.269276   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:08.269285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:08.269351   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:08.305115   73230 cri.go:89] found id: ""
	I0906 20:08:08.305141   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.305149   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:08.305155   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:08.305206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:08.345442   73230 cri.go:89] found id: ""
	I0906 20:08:08.345472   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.345483   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:08.345494   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:08.345510   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:08.396477   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:08.396518   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:08.410978   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:08.411002   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:08.486220   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:08.486247   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:08.486265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:08.574138   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:08.574190   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:08.339280   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:10.340142   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.156809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.160037   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.192303   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.192456   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.192684   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.117545   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:11.131884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:11.131944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:11.169481   73230 cri.go:89] found id: ""
	I0906 20:08:11.169507   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.169518   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:11.169525   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:11.169590   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:11.211068   73230 cri.go:89] found id: ""
	I0906 20:08:11.211092   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.211100   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:11.211105   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:11.211157   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:11.250526   73230 cri.go:89] found id: ""
	I0906 20:08:11.250560   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.250574   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:11.250580   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:11.250627   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:11.289262   73230 cri.go:89] found id: ""
	I0906 20:08:11.289284   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.289292   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:11.289299   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:11.289346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:11.335427   73230 cri.go:89] found id: ""
	I0906 20:08:11.335456   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.335467   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:11.335475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:11.335535   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:11.375481   73230 cri.go:89] found id: ""
	I0906 20:08:11.375509   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.375518   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:11.375524   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:11.375575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:11.416722   73230 cri.go:89] found id: ""
	I0906 20:08:11.416748   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.416758   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:11.416765   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:11.416830   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:11.452986   73230 cri.go:89] found id: ""
	I0906 20:08:11.453019   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.453030   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:11.453042   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:11.453059   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:11.466435   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:11.466461   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:11.545185   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:11.545212   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:11.545231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:11.627390   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:11.627422   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:11.674071   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:11.674098   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.225887   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:14.242121   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:14.242200   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:14.283024   73230 cri.go:89] found id: ""
	I0906 20:08:14.283055   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.283067   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:14.283074   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:14.283135   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:14.325357   73230 cri.go:89] found id: ""
	I0906 20:08:14.325379   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.325387   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:14.325392   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:14.325455   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:14.362435   73230 cri.go:89] found id: ""
	I0906 20:08:14.362459   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.362467   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:14.362473   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:14.362537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:14.398409   73230 cri.go:89] found id: ""
	I0906 20:08:14.398441   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.398450   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:14.398455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:14.398509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:14.434902   73230 cri.go:89] found id: ""
	I0906 20:08:14.434934   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.434943   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:14.434950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:14.435009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:14.476605   73230 cri.go:89] found id: ""
	I0906 20:08:14.476635   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.476647   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:14.476655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:14.476717   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:14.533656   73230 cri.go:89] found id: ""
	I0906 20:08:14.533681   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.533690   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:14.533696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:14.533753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:14.599661   73230 cri.go:89] found id: ""
	I0906 20:08:14.599685   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.599693   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:14.599702   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:14.599715   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.657680   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:14.657712   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:14.671594   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:14.671624   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:14.747945   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:14.747969   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:14.747979   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:14.829021   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:14.829057   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:12.838805   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:14.839569   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.659405   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:16.156840   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:15.692205   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.693709   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.373569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:17.388910   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:17.388987   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:17.428299   73230 cri.go:89] found id: ""
	I0906 20:08:17.428335   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.428347   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:17.428354   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:17.428419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:17.464660   73230 cri.go:89] found id: ""
	I0906 20:08:17.464685   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.464692   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:17.464697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:17.464758   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:17.500018   73230 cri.go:89] found id: ""
	I0906 20:08:17.500047   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.500059   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:17.500067   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:17.500130   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:17.536345   73230 cri.go:89] found id: ""
	I0906 20:08:17.536375   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.536386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:17.536394   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:17.536456   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:17.574668   73230 cri.go:89] found id: ""
	I0906 20:08:17.574696   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.574707   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:17.574715   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:17.574780   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:17.611630   73230 cri.go:89] found id: ""
	I0906 20:08:17.611653   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.611663   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:17.611669   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:17.611713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:17.647610   73230 cri.go:89] found id: ""
	I0906 20:08:17.647639   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.647649   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:17.647657   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:17.647724   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:17.686204   73230 cri.go:89] found id: ""
	I0906 20:08:17.686233   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.686246   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:17.686260   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:17.686273   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:17.702040   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:17.702069   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:17.775033   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:17.775058   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:17.775074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:17.862319   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:17.862359   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:17.905567   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:17.905604   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:17.339116   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:19.839554   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:21.839622   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:18.157104   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.657604   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.191024   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:22.192687   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.457191   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:20.471413   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:20.471474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:20.533714   73230 cri.go:89] found id: ""
	I0906 20:08:20.533749   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.533765   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:20.533772   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:20.533833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:20.580779   73230 cri.go:89] found id: ""
	I0906 20:08:20.580811   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.580823   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:20.580830   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:20.580902   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:20.619729   73230 cri.go:89] found id: ""
	I0906 20:08:20.619755   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.619763   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:20.619769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:20.619816   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:20.661573   73230 cri.go:89] found id: ""
	I0906 20:08:20.661599   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.661606   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:20.661612   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:20.661664   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:20.709409   73230 cri.go:89] found id: ""
	I0906 20:08:20.709443   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.709455   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:20.709463   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:20.709515   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:20.746743   73230 cri.go:89] found id: ""
	I0906 20:08:20.746783   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.746808   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:20.746816   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:20.746891   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:20.788129   73230 cri.go:89] found id: ""
	I0906 20:08:20.788155   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.788164   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:20.788170   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:20.788217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:20.825115   73230 cri.go:89] found id: ""
	I0906 20:08:20.825139   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.825147   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:20.825156   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:20.825167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:20.880975   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:20.881013   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:20.895027   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:20.895061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:20.972718   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:20.972739   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:20.972754   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:21.053062   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:21.053096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:23.595439   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:23.612354   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:23.612419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:23.654479   73230 cri.go:89] found id: ""
	I0906 20:08:23.654508   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.654519   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:23.654526   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:23.654591   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:23.690061   73230 cri.go:89] found id: ""
	I0906 20:08:23.690092   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.690103   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:23.690112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:23.690173   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:23.726644   73230 cri.go:89] found id: ""
	I0906 20:08:23.726670   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.726678   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:23.726684   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:23.726744   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:23.763348   73230 cri.go:89] found id: ""
	I0906 20:08:23.763378   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.763386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:23.763391   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:23.763452   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:23.799260   73230 cri.go:89] found id: ""
	I0906 20:08:23.799290   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.799299   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:23.799305   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:23.799359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:23.843438   73230 cri.go:89] found id: ""
	I0906 20:08:23.843470   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.843481   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:23.843489   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:23.843558   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:23.879818   73230 cri.go:89] found id: ""
	I0906 20:08:23.879847   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.879856   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:23.879867   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:23.879933   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:23.916182   73230 cri.go:89] found id: ""
	I0906 20:08:23.916207   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.916220   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:23.916229   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:23.916240   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:23.987003   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:23.987022   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:23.987033   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:24.073644   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:24.073684   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:24.118293   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:24.118328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:24.172541   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:24.172582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:23.840441   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.338539   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:23.155661   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:25.155855   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:27.157624   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:24.692350   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.692534   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.687747   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:26.702174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:26.702238   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:26.740064   73230 cri.go:89] found id: ""
	I0906 20:08:26.740093   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.740101   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:26.740108   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:26.740158   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:26.775198   73230 cri.go:89] found id: ""
	I0906 20:08:26.775227   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.775237   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:26.775244   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:26.775303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:26.808850   73230 cri.go:89] found id: ""
	I0906 20:08:26.808892   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.808903   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:26.808915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:26.808974   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:26.842926   73230 cri.go:89] found id: ""
	I0906 20:08:26.842953   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.842964   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:26.842972   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:26.843031   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:26.878621   73230 cri.go:89] found id: ""
	I0906 20:08:26.878649   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.878658   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:26.878664   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:26.878713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:26.921816   73230 cri.go:89] found id: ""
	I0906 20:08:26.921862   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.921875   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:26.921884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:26.921952   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:26.960664   73230 cri.go:89] found id: ""
	I0906 20:08:26.960692   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.960702   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:26.960709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:26.960771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:27.004849   73230 cri.go:89] found id: ""
	I0906 20:08:27.004904   73230 logs.go:276] 0 containers: []
	W0906 20:08:27.004913   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:27.004922   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:27.004934   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:27.056237   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:27.056267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:27.071882   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:27.071904   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:27.143927   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:27.143949   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:27.143961   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:27.223901   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:27.223935   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:29.766615   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:29.780295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:29.780367   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:29.817745   73230 cri.go:89] found id: ""
	I0906 20:08:29.817775   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.817784   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:29.817790   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:29.817852   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:29.855536   73230 cri.go:89] found id: ""
	I0906 20:08:29.855559   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.855567   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:29.855572   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:29.855628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:29.895043   73230 cri.go:89] found id: ""
	I0906 20:08:29.895092   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.895104   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:29.895111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:29.895178   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:29.939225   73230 cri.go:89] found id: ""
	I0906 20:08:29.939248   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.939256   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:29.939262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:29.939331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:29.974166   73230 cri.go:89] found id: ""
	I0906 20:08:29.974190   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.974198   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:29.974203   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:29.974258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:30.009196   73230 cri.go:89] found id: ""
	I0906 20:08:30.009226   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.009237   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:30.009245   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:30.009310   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:30.043939   73230 cri.go:89] found id: ""
	I0906 20:08:30.043962   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.043970   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:30.043976   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:30.044023   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:30.080299   73230 cri.go:89] found id: ""
	I0906 20:08:30.080328   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.080336   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:30.080345   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:30.080356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:30.131034   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:30.131068   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:30.145502   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:30.145536   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:30.219941   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:30.219963   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:30.219978   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:30.307958   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:30.307995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:28.839049   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.338815   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.656748   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.657112   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.192284   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.193181   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.854002   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:32.867937   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:32.867998   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:32.906925   73230 cri.go:89] found id: ""
	I0906 20:08:32.906957   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.906969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:32.906976   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:32.907038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:32.946662   73230 cri.go:89] found id: ""
	I0906 20:08:32.946691   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.946702   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:32.946710   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:32.946771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:32.981908   73230 cri.go:89] found id: ""
	I0906 20:08:32.981936   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.981944   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:32.981950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:32.982001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:33.014902   73230 cri.go:89] found id: ""
	I0906 20:08:33.014930   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.014939   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:33.014945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:33.015055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:33.051265   73230 cri.go:89] found id: ""
	I0906 20:08:33.051290   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.051298   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:33.051310   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:33.051363   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:33.085436   73230 cri.go:89] found id: ""
	I0906 20:08:33.085468   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.085480   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:33.085487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:33.085552   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:33.121483   73230 cri.go:89] found id: ""
	I0906 20:08:33.121509   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.121517   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:33.121523   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:33.121578   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:33.159883   73230 cri.go:89] found id: ""
	I0906 20:08:33.159915   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.159926   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:33.159937   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:33.159953   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:33.174411   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:33.174442   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:33.243656   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:33.243694   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:33.243710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:33.321782   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:33.321823   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:33.363299   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:33.363335   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:33.339645   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.839545   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.650358   72441 pod_ready.go:82] duration metric: took 4m0.000296679s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:32.650386   72441 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:32.650410   72441 pod_ready.go:39] duration metric: took 4m12.042795571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:32.650440   72441 kubeadm.go:597] duration metric: took 4m19.97234293s to restartPrimaryControlPlane
	W0906 20:08:32.650505   72441 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:32.650542   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:08:33.692877   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:36.192090   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:38.192465   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.916159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:35.929190   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:35.929265   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:35.962853   73230 cri.go:89] found id: ""
	I0906 20:08:35.962890   73230 logs.go:276] 0 containers: []
	W0906 20:08:35.962901   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:35.962909   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:35.962969   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:36.000265   73230 cri.go:89] found id: ""
	I0906 20:08:36.000309   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.000318   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:36.000324   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:36.000374   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:36.042751   73230 cri.go:89] found id: ""
	I0906 20:08:36.042781   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.042792   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:36.042800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:36.042859   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:36.077922   73230 cri.go:89] found id: ""
	I0906 20:08:36.077957   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.077967   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:36.077975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:36.078038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:36.114890   73230 cri.go:89] found id: ""
	I0906 20:08:36.114926   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.114937   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:36.114945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:36.114997   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:36.148058   73230 cri.go:89] found id: ""
	I0906 20:08:36.148089   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.148101   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:36.148108   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:36.148167   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:36.187334   73230 cri.go:89] found id: ""
	I0906 20:08:36.187361   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.187371   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:36.187379   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:36.187498   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:36.221295   73230 cri.go:89] found id: ""
	I0906 20:08:36.221331   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.221342   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:36.221353   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:36.221367   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:36.273489   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:36.273527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:36.287975   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:36.288005   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:36.366914   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:36.366937   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:36.366950   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:36.446582   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:36.446619   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.987075   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:39.001051   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:39.001113   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:39.038064   73230 cri.go:89] found id: ""
	I0906 20:08:39.038093   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.038103   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:39.038110   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:39.038175   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:39.075759   73230 cri.go:89] found id: ""
	I0906 20:08:39.075788   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.075799   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:39.075805   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:39.075866   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:39.113292   73230 cri.go:89] found id: ""
	I0906 20:08:39.113320   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.113331   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:39.113339   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:39.113404   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:39.157236   73230 cri.go:89] found id: ""
	I0906 20:08:39.157269   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.157281   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:39.157289   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:39.157362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:39.195683   73230 cri.go:89] found id: ""
	I0906 20:08:39.195704   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.195712   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:39.195717   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:39.195763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:39.234865   73230 cri.go:89] found id: ""
	I0906 20:08:39.234894   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.234903   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:39.234909   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:39.234961   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:39.269946   73230 cri.go:89] found id: ""
	I0906 20:08:39.269975   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.269983   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:39.269989   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:39.270034   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:39.306184   73230 cri.go:89] found id: ""
	I0906 20:08:39.306214   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.306225   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:39.306235   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:39.306249   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:39.357887   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:39.357920   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:39.371736   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:39.371767   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:39.445674   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:39.445695   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:39.445708   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:39.525283   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:39.525316   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.343370   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.839247   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.691846   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.694807   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.069066   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:42.083229   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:42.083313   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:42.124243   73230 cri.go:89] found id: ""
	I0906 20:08:42.124267   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.124275   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:42.124280   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:42.124330   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:42.162070   73230 cri.go:89] found id: ""
	I0906 20:08:42.162102   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.162113   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:42.162120   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:42.162183   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:42.199161   73230 cri.go:89] found id: ""
	I0906 20:08:42.199191   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.199201   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:42.199208   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:42.199266   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:42.236956   73230 cri.go:89] found id: ""
	I0906 20:08:42.236980   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.236991   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:42.236996   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:42.237068   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:42.272299   73230 cri.go:89] found id: ""
	I0906 20:08:42.272328   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.272336   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:42.272341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:42.272400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:42.310280   73230 cri.go:89] found id: ""
	I0906 20:08:42.310304   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.310312   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:42.310317   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:42.310362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:42.345850   73230 cri.go:89] found id: ""
	I0906 20:08:42.345873   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.345881   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:42.345887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:42.345937   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:42.380785   73230 cri.go:89] found id: ""
	I0906 20:08:42.380812   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.380820   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:42.380830   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:42.380843   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.435803   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:42.435839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:42.450469   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:42.450498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:42.521565   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:42.521587   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:42.521599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:42.595473   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:42.595508   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:45.136985   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:45.150468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:45.150540   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:45.186411   73230 cri.go:89] found id: ""
	I0906 20:08:45.186440   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.186448   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:45.186454   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:45.186521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:45.224463   73230 cri.go:89] found id: ""
	I0906 20:08:45.224495   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.224506   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:45.224513   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:45.224568   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:45.262259   73230 cri.go:89] found id: ""
	I0906 20:08:45.262286   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.262295   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:45.262301   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:45.262357   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:45.299463   73230 cri.go:89] found id: ""
	I0906 20:08:45.299492   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.299501   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:45.299507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:45.299561   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:45.336125   73230 cri.go:89] found id: ""
	I0906 20:08:45.336153   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.336162   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:45.336168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:45.336216   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:45.370397   73230 cri.go:89] found id: ""
	I0906 20:08:45.370427   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.370439   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:45.370448   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:45.370518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:45.406290   73230 cri.go:89] found id: ""
	I0906 20:08:45.406322   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.406333   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:45.406341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:45.406402   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:45.441560   73230 cri.go:89] found id: ""
	I0906 20:08:45.441592   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.441603   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:45.441614   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:45.441627   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.840127   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.349331   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.192059   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:47.691416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.508769   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:45.508811   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:45.523659   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:45.523696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:45.595544   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:45.595567   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:45.595582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:45.676060   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:45.676096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:48.216490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:48.230021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:48.230093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:48.267400   73230 cri.go:89] found id: ""
	I0906 20:08:48.267433   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.267444   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:48.267451   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:48.267519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:48.314694   73230 cri.go:89] found id: ""
	I0906 20:08:48.314722   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.314731   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:48.314739   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:48.314805   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:48.358861   73230 cri.go:89] found id: ""
	I0906 20:08:48.358895   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.358906   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:48.358915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:48.358990   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:48.398374   73230 cri.go:89] found id: ""
	I0906 20:08:48.398400   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.398410   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:48.398416   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:48.398488   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:48.438009   73230 cri.go:89] found id: ""
	I0906 20:08:48.438039   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.438050   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:48.438058   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:48.438115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:48.475970   73230 cri.go:89] found id: ""
	I0906 20:08:48.475998   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.476007   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:48.476013   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:48.476071   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:48.512191   73230 cri.go:89] found id: ""
	I0906 20:08:48.512220   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.512230   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:48.512237   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:48.512299   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:48.547820   73230 cri.go:89] found id: ""
	I0906 20:08:48.547850   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.547861   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:48.547872   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:48.547886   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:48.616962   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:48.616997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:48.631969   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:48.631998   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:48.717025   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:48.717043   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:48.717054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:48.796131   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:48.796167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:47.838558   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.839063   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.839099   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.693239   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:52.191416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.342030   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:51.355761   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:51.355845   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:51.395241   73230 cri.go:89] found id: ""
	I0906 20:08:51.395272   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.395283   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:51.395290   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:51.395350   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:51.433860   73230 cri.go:89] found id: ""
	I0906 20:08:51.433888   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.433897   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:51.433904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:51.433968   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:51.475568   73230 cri.go:89] found id: ""
	I0906 20:08:51.475598   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.475608   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:51.475615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:51.475678   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:51.512305   73230 cri.go:89] found id: ""
	I0906 20:08:51.512329   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.512337   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:51.512342   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:51.512391   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:51.545796   73230 cri.go:89] found id: ""
	I0906 20:08:51.545819   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.545827   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:51.545833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:51.545884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:51.578506   73230 cri.go:89] found id: ""
	I0906 20:08:51.578531   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.578539   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:51.578545   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:51.578609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:51.616571   73230 cri.go:89] found id: ""
	I0906 20:08:51.616596   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.616609   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:51.616615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:51.616660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:51.651542   73230 cri.go:89] found id: ""
	I0906 20:08:51.651566   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.651580   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:51.651588   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:51.651599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:51.705160   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:51.705193   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:51.719450   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:51.719477   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:51.789775   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:51.789796   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:51.789809   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:51.870123   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:51.870158   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.411818   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:54.425759   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:54.425818   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:54.467920   73230 cri.go:89] found id: ""
	I0906 20:08:54.467943   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.467951   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:54.467956   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:54.468008   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:54.508324   73230 cri.go:89] found id: ""
	I0906 20:08:54.508349   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.508357   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:54.508363   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:54.508410   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:54.544753   73230 cri.go:89] found id: ""
	I0906 20:08:54.544780   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.544790   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:54.544797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:54.544884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:54.581407   73230 cri.go:89] found id: ""
	I0906 20:08:54.581436   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.581446   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:54.581453   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:54.581514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:54.618955   73230 cri.go:89] found id: ""
	I0906 20:08:54.618986   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.618998   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:54.619006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:54.619065   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:54.656197   73230 cri.go:89] found id: ""
	I0906 20:08:54.656229   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.656248   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:54.656255   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:54.656316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:54.697499   73230 cri.go:89] found id: ""
	I0906 20:08:54.697536   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.697544   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:54.697549   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:54.697600   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:54.734284   73230 cri.go:89] found id: ""
	I0906 20:08:54.734313   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.734331   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:54.734342   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:54.734356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:54.811079   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:54.811100   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:54.811111   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:54.887309   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:54.887346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.930465   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:54.930499   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:55.000240   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:55.000303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:54.339076   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:54.833352   72867 pod_ready.go:82] duration metric: took 4m0.000854511s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:54.833398   72867 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:54.833423   72867 pod_ready.go:39] duration metric: took 4m14.79685184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:54.833458   72867 kubeadm.go:597] duration metric: took 4m22.254900492s to restartPrimaryControlPlane
	W0906 20:08:54.833525   72867 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:54.833576   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:08:54.192038   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:56.192120   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:58.193505   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:57.530956   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:57.544056   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:57.544136   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:57.584492   73230 cri.go:89] found id: ""
	I0906 20:08:57.584519   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.584528   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:57.584534   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:57.584585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:57.620220   73230 cri.go:89] found id: ""
	I0906 20:08:57.620250   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.620259   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:57.620265   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:57.620321   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:57.655245   73230 cri.go:89] found id: ""
	I0906 20:08:57.655268   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.655283   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:57.655288   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:57.655346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:57.690439   73230 cri.go:89] found id: ""
	I0906 20:08:57.690470   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.690481   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:57.690487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:57.690551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:57.728179   73230 cri.go:89] found id: ""
	I0906 20:08:57.728206   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.728214   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:57.728221   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:57.728270   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:57.763723   73230 cri.go:89] found id: ""
	I0906 20:08:57.763752   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.763761   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:57.763767   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:57.763825   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:57.799836   73230 cri.go:89] found id: ""
	I0906 20:08:57.799861   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.799869   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:57.799876   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:57.799922   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:57.834618   73230 cri.go:89] found id: ""
	I0906 20:08:57.834644   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.834651   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:57.834660   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:57.834671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:57.887297   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:57.887331   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:57.901690   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:57.901717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:57.969179   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:57.969209   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:57.969223   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:58.052527   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:58.052642   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:58.870446   72441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.219876198s)
	I0906 20:08:58.870530   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:08:58.888197   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:08:58.899185   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:08:58.909740   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:08:58.909762   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:08:58.909806   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:08:58.919589   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:08:58.919646   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:08:58.930386   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:08:58.940542   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:08:58.940621   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:08:58.951673   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.963471   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:08:58.963545   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.974638   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:08:58.984780   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:08:58.984843   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:08:58.995803   72441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:08:59.046470   72441 kubeadm.go:310] W0906 20:08:59.003226    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.047297   72441 kubeadm.go:310] W0906 20:08:59.004193    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.166500   72441 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:00.691499   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:02.692107   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:00.593665   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:00.608325   73230 kubeadm.go:597] duration metric: took 4m4.153407014s to restartPrimaryControlPlane
	W0906 20:09:00.608399   73230 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:00.608428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:05.878028   73230 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.269561172s)
	I0906 20:09:05.878112   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:05.893351   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:05.904668   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:05.915560   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:05.915583   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:05.915633   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:09:05.926566   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:05.926625   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:05.937104   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:09:05.946406   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:05.946467   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:05.956203   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.965691   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:05.965751   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.976210   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:09:05.986104   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:05.986174   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:05.996282   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:06.068412   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:09:06.068507   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:06.213882   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:06.214044   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:06.214191   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:09:06.406793   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.067295   72441 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:07.067370   72441 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:07.067449   72441 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:07.067595   72441 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:07.067737   72441 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:07.067795   72441 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.069381   72441 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:07.069477   72441 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:07.069559   72441 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:07.069652   72441 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:07.069733   72441 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:07.069825   72441 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:07.069898   72441 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:07.069981   72441 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:07.070068   72441 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:07.070178   72441 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:07.070279   72441 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:07.070349   72441 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:07.070424   72441 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:07.070494   72441 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:07.070592   72441 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:07.070669   72441 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.070755   72441 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.070828   72441 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.070916   72441 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.070972   72441 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:07.072214   72441 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.072317   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.072399   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.072487   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.072613   72441 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.072685   72441 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.072719   72441 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.072837   72441 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:07.072977   72441 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:07.073063   72441 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.515053ms
	I0906 20:09:07.073178   72441 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:07.073257   72441 kubeadm.go:310] [api-check] The API server is healthy after 5.001748851s
	I0906 20:09:07.073410   72441 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:07.073558   72441 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:07.073650   72441 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:07.073860   72441 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-458066 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:07.073936   72441 kubeadm.go:310] [bootstrap-token] Using token: 3t2lf6.w44vkc4kfppuo2gp
	I0906 20:09:07.075394   72441 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:07.075524   72441 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:07.075621   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:07.075738   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:07.075905   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:07.076003   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:07.076094   72441 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:07.076222   72441 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:07.076397   72441 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:07.076486   72441 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:07.076502   72441 kubeadm.go:310] 
	I0906 20:09:07.076579   72441 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:07.076594   72441 kubeadm.go:310] 
	I0906 20:09:07.076687   72441 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:07.076698   72441 kubeadm.go:310] 
	I0906 20:09:07.076727   72441 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:07.076810   72441 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:07.076893   72441 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:07.076900   72441 kubeadm.go:310] 
	I0906 20:09:07.077016   72441 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:07.077029   72441 kubeadm.go:310] 
	I0906 20:09:07.077090   72441 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:07.077105   72441 kubeadm.go:310] 
	I0906 20:09:07.077172   72441 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:07.077273   72441 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:07.077368   72441 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:07.077377   72441 kubeadm.go:310] 
	I0906 20:09:07.077496   72441 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:07.077589   72441 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:07.077600   72441 kubeadm.go:310] 
	I0906 20:09:07.077680   72441 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.077767   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:07.077807   72441 kubeadm.go:310] 	--control-plane 
	I0906 20:09:07.077817   72441 kubeadm.go:310] 
	I0906 20:09:07.077927   72441 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:07.077946   72441 kubeadm.go:310] 
	I0906 20:09:07.078053   72441 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.078191   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:09:07.078206   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:09:07.078216   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:07.079782   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:07.080965   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:07.092500   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:09:07.112546   72441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:07.112618   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:07.112648   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-458066 minikube.k8s.io/updated_at=2024_09_06T20_09_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=embed-certs-458066 minikube.k8s.io/primary=true
	I0906 20:09:07.343125   72441 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:07.343284   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:06.408933   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:06.409043   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:06.409126   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:06.409242   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:06.409351   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:06.409445   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:06.409559   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:06.409666   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:06.409758   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:06.409870   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:06.409964   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:06.410010   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:06.410101   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:06.721268   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:06.888472   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.414908   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.505887   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.525704   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.525835   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.525913   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.699971   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:04.692422   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.193312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.701970   73230 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.702095   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.708470   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.710216   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.711016   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.714706   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:09:07.844097   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.344174   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.843884   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.343591   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.843748   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.344148   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.844002   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.343424   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.444023   72441 kubeadm.go:1113] duration metric: took 4.331471016s to wait for elevateKubeSystemPrivileges
	I0906 20:09:11.444067   72441 kubeadm.go:394] duration metric: took 4m58.815096997s to StartCluster
	I0906 20:09:11.444093   72441 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.444186   72441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:11.446093   72441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.446360   72441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:11.446430   72441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:11.446521   72441 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-458066"
	I0906 20:09:11.446542   72441 addons.go:69] Setting default-storageclass=true in profile "embed-certs-458066"
	I0906 20:09:11.446560   72441 addons.go:69] Setting metrics-server=true in profile "embed-certs-458066"
	I0906 20:09:11.446609   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:11.446615   72441 addons.go:234] Setting addon metrics-server=true in "embed-certs-458066"
	W0906 20:09:11.446663   72441 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:11.446694   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.446576   72441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-458066"
	I0906 20:09:11.446570   72441 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-458066"
	W0906 20:09:11.446779   72441 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:11.446810   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.447077   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447112   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447170   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447211   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447350   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447426   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447879   72441 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:11.449461   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:11.463673   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44603
	I0906 20:09:11.463676   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0906 20:09:11.464129   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464231   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464669   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464691   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.464675   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464745   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.465097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465139   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465608   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465634   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.465731   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465778   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.466622   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0906 20:09:11.466967   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.467351   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.467366   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.467622   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.467759   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.471093   72441 addons.go:234] Setting addon default-storageclass=true in "embed-certs-458066"
	W0906 20:09:11.471115   72441 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:11.471145   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.471524   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.471543   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.488980   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0906 20:09:11.489014   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0906 20:09:11.489399   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0906 20:09:11.489465   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489517   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489908   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.490116   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490134   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490144   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490158   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490411   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490427   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490481   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490872   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490886   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.491406   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.491500   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.491520   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.491619   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.493485   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.493901   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.495272   72441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:11.495274   72441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:11.496553   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:11.496575   72441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:11.496597   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.496647   72441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.496667   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:11.496684   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.500389   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500395   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500469   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.500786   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500808   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500952   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501105   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.501145   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501259   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501305   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.501389   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501501   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.510188   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0906 20:09:11.510617   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.511142   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.511169   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.511539   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.511754   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.513207   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.513439   72441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.513455   72441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:11.513474   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.516791   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517292   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.517323   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517563   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.517898   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.518085   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.518261   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.669057   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:11.705086   72441 node_ready.go:35] waiting up to 6m0s for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731651   72441 node_ready.go:49] node "embed-certs-458066" has status "Ready":"True"
	I0906 20:09:11.731679   72441 node_ready.go:38] duration metric: took 26.546983ms for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731691   72441 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:11.740680   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:11.767740   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:11.767760   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:11.771571   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.804408   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:11.804435   72441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:11.844160   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.856217   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:11.856240   72441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:11.899134   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:13.159543   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.315345353s)
	I0906 20:09:13.159546   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387931315s)
	I0906 20:09:13.159639   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159660   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159601   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159711   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.159985   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.159997   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160008   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160018   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160080   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160095   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160104   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160115   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160265   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160289   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160401   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160417   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185478   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.185512   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.185914   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.185934   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185949   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.228561   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.329382232s)
	I0906 20:09:13.228621   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.228636   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228924   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.228978   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.228991   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.229001   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.229229   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.229258   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.229270   72441 addons.go:475] Verifying addon metrics-server=true in "embed-certs-458066"
	I0906 20:09:13.230827   72441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:09.691281   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:11.692514   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:13.231988   72441 addons.go:510] duration metric: took 1.785558897s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0906 20:09:13.750043   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.247314   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.748039   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:16.748064   72441 pod_ready.go:82] duration metric: took 5.007352361s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:16.748073   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:14.192167   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.691856   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:18.754580   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:19.254643   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:19.254669   72441 pod_ready.go:82] duration metric: took 2.506589666s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:19.254680   72441 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762162   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.762188   72441 pod_ready.go:82] duration metric: took 1.507501384s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762202   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770835   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.770860   72441 pod_ready.go:82] duration metric: took 8.65029ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770872   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779692   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.779713   72441 pod_ready.go:82] duration metric: took 8.832607ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779725   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786119   72441 pod_ready.go:93] pod "kube-proxy-rzx2f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.786146   72441 pod_ready.go:82] duration metric: took 6.414063ms for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786158   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852593   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.852630   72441 pod_ready.go:82] duration metric: took 66.461213ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852642   72441 pod_ready.go:39] duration metric: took 9.120937234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:20.852663   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:20.852729   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:20.871881   72441 api_server.go:72] duration metric: took 9.425481233s to wait for apiserver process to appear ...
	I0906 20:09:20.871911   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:20.871927   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:09:20.876997   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
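
The healthz wait logged above is just a poll of the apiserver's /healthz endpoint until it returns 200. A minimal Go sketch of that pattern follows; it is not minikube's actual implementation, and the URL, timeout, and the InsecureSkipVerify shortcut are assumptions made purely for illustration.

// Illustrative sketch only: poll an apiserver /healthz endpoint until it
// answers 200, as the log above records. URL and timeout are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves /healthz over TLS with a cluster-internal CA,
	// so this sketch skips verification purely for illustration.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200, control plane is serving
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.118:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
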
	I0906 20:09:20.878290   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:20.878314   72441 api_server.go:131] duration metric: took 6.396943ms to wait for apiserver health ...
	I0906 20:09:20.878324   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:21.057265   72441 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:21.057303   72441 system_pods.go:61] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.057312   72441 system_pods.go:61] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.057319   72441 system_pods.go:61] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.057326   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.057332   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.057338   72441 system_pods.go:61] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.057345   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.057356   72441 system_pods.go:61] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.057367   72441 system_pods.go:61] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.057381   72441 system_pods.go:74] duration metric: took 179.050809ms to wait for pod list to return data ...
	I0906 20:09:21.057394   72441 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:21.252816   72441 default_sa.go:45] found service account: "default"
	I0906 20:09:21.252842   72441 default_sa.go:55] duration metric: took 195.436403ms for default service account to be created ...
	I0906 20:09:21.252851   72441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:21.455714   72441 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:21.455742   72441 system_pods.go:89] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.455748   72441 system_pods.go:89] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.455752   72441 system_pods.go:89] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.455755   72441 system_pods.go:89] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.455759   72441 system_pods.go:89] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.455763   72441 system_pods.go:89] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.455766   72441 system_pods.go:89] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.455772   72441 system_pods.go:89] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.455776   72441 system_pods.go:89] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.455784   72441 system_pods.go:126] duration metric: took 202.909491ms to wait for k8s-apps to be running ...
	I0906 20:09:21.455791   72441 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:21.455832   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.474124   72441 system_svc.go:56] duration metric: took 18.325386ms WaitForService to wait for kubelet
	I0906 20:09:21.474150   72441 kubeadm.go:582] duration metric: took 10.027757317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:21.474172   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:21.653674   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:21.653697   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:21.653708   72441 node_conditions.go:105] duration metric: took 179.531797ms to run NodePressure ...
	I0906 20:09:21.653718   72441 start.go:241] waiting for startup goroutines ...
	I0906 20:09:21.653727   72441 start.go:246] waiting for cluster config update ...
	I0906 20:09:21.653740   72441 start.go:255] writing updated cluster config ...
	I0906 20:09:21.654014   72441 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:21.703909   72441 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:21.705502   72441 out.go:177] * Done! kubectl is now configured to use "embed-certs-458066" cluster and "default" namespace by default
	I0906 20:09:21.102986   72867 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.269383553s)
	I0906 20:09:21.103094   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.118935   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:21.129099   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:21.139304   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:21.139326   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:21.139374   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:09:21.149234   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:21.149289   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:21.160067   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:09:21.169584   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:21.169664   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:21.179885   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.190994   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:21.191062   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.201649   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:09:21.211165   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:21.211223   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
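
The stale-config cleanup logged above follows one simple pattern per file: grep the kubeconfig for the expected control-plane endpoint, and remove the file when the endpoint is absent or the file does not exist. A rough Go sketch of that pattern follows; it assumes commands run locally via os/exec rather than over SSH, and the endpoint and file list are copied from the log for illustration only.

// Illustrative sketch of the grep-then-rm cleanup seen in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s, removing\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
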
	I0906 20:09:21.220998   72867 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:21.269780   72867 kubeadm.go:310] W0906 20:09:21.240800    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.270353   72867 kubeadm.go:310] W0906 20:09:21.241533    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.389445   72867 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:18.692475   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:21.193075   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:23.697031   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:26.191208   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:28.192166   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:30.493468   72867 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:30.493543   72867 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:30.493620   72867 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:30.493751   72867 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:30.493891   72867 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:30.493971   72867 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:30.495375   72867 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:30.495467   72867 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:30.495537   72867 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:30.495828   72867 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:30.495913   72867 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:30.495977   72867 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:30.496024   72867 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:30.496112   72867 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:30.496207   72867 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:30.496308   72867 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:30.496400   72867 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:30.496452   72867 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:30.496519   72867 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:30.496601   72867 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:30.496690   72867 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:30.496774   72867 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:30.496887   72867 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:30.496946   72867 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:30.497018   72867 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:30.497074   72867 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:30.498387   72867 out.go:235]   - Booting up control plane ...
	I0906 20:09:30.498472   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:30.498550   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:30.498616   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:30.498715   72867 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:30.498786   72867 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:30.498821   72867 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:30.498969   72867 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:30.499076   72867 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:30.499126   72867 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.325552ms
	I0906 20:09:30.499189   72867 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:30.499269   72867 kubeadm.go:310] [api-check] The API server is healthy after 5.002261512s
	I0906 20:09:30.499393   72867 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:30.499507   72867 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:30.499586   72867 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:30.499818   72867 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-653828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:30.499915   72867 kubeadm.go:310] [bootstrap-token] Using token: 6yha4r.f9kcjkhkq2u0pp1e
	I0906 20:09:30.501217   72867 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:30.501333   72867 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:30.501438   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:30.501630   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:30.501749   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:30.501837   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:30.501904   72867 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:30.501996   72867 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:30.502032   72867 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:30.502085   72867 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:30.502093   72867 kubeadm.go:310] 
	I0906 20:09:30.502153   72867 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:30.502166   72867 kubeadm.go:310] 
	I0906 20:09:30.502242   72867 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:30.502257   72867 kubeadm.go:310] 
	I0906 20:09:30.502290   72867 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:30.502358   72867 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:30.502425   72867 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:30.502433   72867 kubeadm.go:310] 
	I0906 20:09:30.502486   72867 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:30.502494   72867 kubeadm.go:310] 
	I0906 20:09:30.502529   72867 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:30.502536   72867 kubeadm.go:310] 
	I0906 20:09:30.502575   72867 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:30.502633   72867 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:30.502706   72867 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:30.502720   72867 kubeadm.go:310] 
	I0906 20:09:30.502791   72867 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:30.502882   72867 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:30.502893   72867 kubeadm.go:310] 
	I0906 20:09:30.502982   72867 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503099   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:30.503120   72867 kubeadm.go:310] 	--control-plane 
	I0906 20:09:30.503125   72867 kubeadm.go:310] 
	I0906 20:09:30.503240   72867 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:30.503247   72867 kubeadm.go:310] 
	I0906 20:09:30.503312   72867 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503406   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:09:30.503416   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:09:30.503424   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:30.504880   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:30.505997   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:30.517864   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
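
For context on the step above, a bridge CNI .conflist of the kind copied to /etc/cni/net.d/1-k8s.conflist generally resembles the sketch below. This is illustrative only: it is not the exact 496-byte file from the log, and the subnet and plugin options shown are assumptions.

// Illustrative sketch: write a minimal bridge CNI .conflist. Field values are
// assumptions, not the exact file minikube copied in the log above.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Writing to a temp path here; the real file lands under /etc/cni/net.d.
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
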
	I0906 20:09:30.539641   72867 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:30.539731   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653828 minikube.k8s.io/updated_at=2024_09_06T20_09_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=default-k8s-diff-port-653828 minikube.k8s.io/primary=true
	I0906 20:09:30.539732   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.576812   72867 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:30.742163   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.242299   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.742502   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.192201   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.691488   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.242418   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:32.742424   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.242317   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.742587   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.242563   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.342481   72867 kubeadm.go:1113] duration metric: took 3.802829263s to wait for elevateKubeSystemPrivileges
	I0906 20:09:34.342520   72867 kubeadm.go:394] duration metric: took 5m1.826839653s to StartCluster
	I0906 20:09:34.342542   72867 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.342640   72867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:34.345048   72867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.345461   72867 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:34.345576   72867 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:34.345655   72867 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345691   72867 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653828"
	I0906 20:09:34.345696   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:34.345699   72867 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345712   72867 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345737   72867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653828"
	W0906 20:09:34.345703   72867 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:34.345752   72867 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.345762   72867 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:34.345779   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.345795   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.346102   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346136   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346174   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346195   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346231   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346201   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.347895   72867 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:34.349535   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:34.363021   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0906 20:09:34.363492   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.364037   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.364062   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.364463   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.365147   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.365186   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.365991   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36811
	I0906 20:09:34.366024   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0906 20:09:34.366472   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366512   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366953   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.366970   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367086   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.367113   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367494   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367642   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367988   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.368011   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.368282   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.375406   72867 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.375432   72867 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:34.375460   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.375825   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.375858   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.382554   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0906 20:09:34.383102   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.383600   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.383616   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.383938   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.384214   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.385829   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.387409   72867 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:34.388348   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:34.388366   72867 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:34.388381   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.392542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.392813   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.392828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.393018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.393068   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0906 20:09:34.393374   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.393439   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.393550   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.393686   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.394089   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.394116   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.394464   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.394651   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.396559   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.396712   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0906 20:09:34.397142   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.397646   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.397669   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.397929   72867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:34.398023   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.398468   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.398511   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.399007   72867 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.399024   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:34.399043   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.405024   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405057   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.405081   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405287   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.405479   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.405634   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.405752   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.414779   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0906 20:09:34.415230   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.415662   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.415679   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.415993   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.416151   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.417818   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.418015   72867 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.418028   72867 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:34.418045   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.421303   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421379   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.421399   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421645   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.421815   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.421979   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.422096   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.582923   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:34.600692   72867 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617429   72867 node_ready.go:49] node "default-k8s-diff-port-653828" has status "Ready":"True"
	I0906 20:09:34.617454   72867 node_ready.go:38] duration metric: took 16.723446ms for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617465   72867 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:34.632501   72867 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:34.679561   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.682999   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.746380   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:34.746406   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:34.876650   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:34.876680   72867 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:34.935388   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:34.935415   72867 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:35.092289   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:35.709257   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02965114s)
	I0906 20:09:35.709297   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026263795s)
	I0906 20:09:35.709352   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709373   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709319   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709398   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709810   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.709911   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709898   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709926   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.709954   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709962   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709876   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710029   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710047   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.710065   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.710226   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710238   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710636   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.710665   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710681   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754431   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.754458   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.754765   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.754781   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754821   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.181191   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:36.181219   72867 pod_ready.go:82] duration metric: took 1.54868366s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.181233   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.351617   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.259284594s)
	I0906 20:09:36.351684   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.351701   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.351992   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352078   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352100   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.352111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.352055   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352402   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352914   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352934   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352945   72867 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-653828"
	I0906 20:09:36.354972   72867 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:36.356127   72867 addons.go:510] duration metric: took 2.010554769s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0906 20:09:34.695700   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:37.193366   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:38.187115   72867 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:39.188966   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:39.188998   72867 pod_ready.go:82] duration metric: took 3.007757042s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:39.189012   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:41.196228   72867 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.206614   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.206636   72867 pod_ready.go:82] duration metric: took 3.017616218s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.206647   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212140   72867 pod_ready.go:93] pod "kube-proxy-7846f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.212165   72867 pod_ready.go:82] duration metric: took 5.512697ms for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212174   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217505   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.217527   72867 pod_ready.go:82] duration metric: took 5.346748ms for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217534   72867 pod_ready.go:39] duration metric: took 7.600058293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:42.217549   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:42.217600   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:42.235961   72867 api_server.go:72] duration metric: took 7.890460166s to wait for apiserver process to appear ...
	I0906 20:09:42.235987   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:42.236003   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:09:42.240924   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:09:42.241889   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:42.241912   72867 api_server.go:131] duration metric: took 5.919055ms to wait for apiserver health ...
	I0906 20:09:42.241922   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:42.247793   72867 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:42.247825   72867 system_pods.go:61] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.247833   72867 system_pods.go:61] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.247839   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.247845   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.247852   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.247857   72867 system_pods.go:61] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.247861   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.247866   72867 system_pods.go:61] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.247873   72867 system_pods.go:61] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.247883   72867 system_pods.go:74] duration metric: took 5.95413ms to wait for pod list to return data ...
	I0906 20:09:42.247893   72867 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:42.251260   72867 default_sa.go:45] found service account: "default"
	I0906 20:09:42.251277   72867 default_sa.go:55] duration metric: took 3.3795ms for default service account to be created ...
	I0906 20:09:42.251284   72867 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:42.256204   72867 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:42.256228   72867 system_pods.go:89] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.256233   72867 system_pods.go:89] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.256237   72867 system_pods.go:89] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.256241   72867 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.256245   72867 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.256249   72867 system_pods.go:89] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.256252   72867 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.256258   72867 system_pods.go:89] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.256261   72867 system_pods.go:89] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.256270   72867 system_pods.go:126] duration metric: took 4.981383ms to wait for k8s-apps to be running ...
	I0906 20:09:42.256278   72867 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:42.256323   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:42.272016   72867 system_svc.go:56] duration metric: took 15.727796ms WaitForService to wait for kubelet
	I0906 20:09:42.272050   72867 kubeadm.go:582] duration metric: took 7.926551396s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:42.272081   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:42.275486   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:42.275516   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:42.275527   72867 node_conditions.go:105] duration metric: took 3.439966ms to run NodePressure ...
	I0906 20:09:42.275540   72867 start.go:241] waiting for startup goroutines ...
	I0906 20:09:42.275548   72867 start.go:246] waiting for cluster config update ...
	I0906 20:09:42.275561   72867 start.go:255] writing updated cluster config ...
	I0906 20:09:42.275823   72867 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:42.326049   72867 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:42.328034   72867 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653828" cluster and "default" namespace by default
	I0906 20:09:39.692393   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.192176   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:44.691934   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:45.185317   72322 pod_ready.go:82] duration metric: took 4m0.000138495s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	E0906 20:09:45.185352   72322 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:09:45.185371   72322 pod_ready.go:39] duration metric: took 4m12.222584677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:45.185403   72322 kubeadm.go:597] duration metric: took 4m20.152442555s to restartPrimaryControlPlane
	W0906 20:09:45.185466   72322 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:45.185496   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:47.714239   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:09:47.714464   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:47.714711   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:09:52.715187   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:52.715391   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:02.716155   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:02.716424   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
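	(Annotation: the repeated [kubelet-check] lines above are kubeadm polling the kubelet's health endpoint, literally the `curl -sSL http://localhost:10248/healthz` call it prints; "connection refused" means nothing is listening on port 10248, i.e. the kubelet is not running. The following is a minimal Go sketch of that probe, assuming the standard port and a 4-minute deadline as stated in the log; it is an illustration, not kubeadm's or minikube's actual code.)

	    // healthz_probe.go: sketch of the kubelet health check described by the
	    // [kubelet-check] lines above: GET http://localhost:10248/healthz, retried
	    // until it returns 200 OK or the deadline passes.
	    package main

	    import (
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{Timeout: 2 * time.Second}
	        deadline := time.Now().Add(4 * time.Minute) // kubeadm waits up to 4m0s
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("http://localhost:10248/healthz")
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("kubelet is healthy")
	                    return
	                }
	            }
	            // "connection refused" here usually means the kubelet process is not running at all.
	            time.Sleep(5 * time.Second)
	        }
	        fmt.Println("timed out waiting for the kubelet healthz endpoint")
	    }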
	I0906 20:10:11.446625   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261097398s)
	I0906 20:10:11.446717   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:11.472899   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:10:11.492643   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:10:11.509855   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:10:11.509878   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:10:11.509933   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:10:11.523039   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:10:11.523099   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:10:11.540484   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:10:11.560246   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:10:11.560323   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:10:11.585105   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.596067   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:10:11.596138   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.607049   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:10:11.616982   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:10:11.617058   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:10:11.627880   72322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:10:11.672079   72322 kubeadm.go:310] W0906 20:10:11.645236    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.672935   72322 kubeadm.go:310] W0906 20:10:11.646151    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.789722   72322 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:10:20.270339   72322 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:10:20.270450   72322 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:10:20.270551   72322 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:10:20.270697   72322 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:10:20.270837   72322 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:10:20.270932   72322 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:10:20.272324   72322 out.go:235]   - Generating certificates and keys ...
	I0906 20:10:20.272437   72322 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:10:20.272530   72322 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:10:20.272634   72322 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:10:20.272732   72322 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:10:20.272842   72322 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:10:20.272950   72322 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:10:20.273051   72322 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:10:20.273135   72322 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:10:20.273272   72322 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:10:20.273361   72322 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:10:20.273400   72322 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:10:20.273456   72322 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:10:20.273517   72322 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:10:20.273571   72322 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:10:20.273625   72322 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:10:20.273682   72322 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:10:20.273731   72322 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:10:20.273801   72322 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:10:20.273856   72322 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:10:20.275359   72322 out.go:235]   - Booting up control plane ...
	I0906 20:10:20.275466   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:10:20.275539   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:10:20.275595   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:10:20.275692   72322 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:10:20.275774   72322 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:10:20.275812   72322 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:10:20.275917   72322 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:10:20.276005   72322 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:10:20.276063   72322 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001365031s
	I0906 20:10:20.276127   72322 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:10:20.276189   72322 kubeadm.go:310] [api-check] The API server is healthy after 5.002810387s
	I0906 20:10:20.276275   72322 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:10:20.276410   72322 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:10:20.276480   72322 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:10:20.276639   72322 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-504385 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:10:20.276690   72322 kubeadm.go:310] [bootstrap-token] Using token: fv12w2.cc6vcthx5yn6r6ru
	I0906 20:10:20.277786   72322 out.go:235]   - Configuring RBAC rules ...
	I0906 20:10:20.277872   72322 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:10:20.277941   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:10:20.278082   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:10:20.278231   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:10:20.278351   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:10:20.278426   72322 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:10:20.278541   72322 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:10:20.278614   72322 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:10:20.278692   72322 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:10:20.278700   72322 kubeadm.go:310] 
	I0906 20:10:20.278780   72322 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:10:20.278790   72322 kubeadm.go:310] 
	I0906 20:10:20.278880   72322 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:10:20.278889   72322 kubeadm.go:310] 
	I0906 20:10:20.278932   72322 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:10:20.279023   72322 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:10:20.279079   72322 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:10:20.279086   72322 kubeadm.go:310] 
	I0906 20:10:20.279141   72322 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:10:20.279148   72322 kubeadm.go:310] 
	I0906 20:10:20.279186   72322 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:10:20.279195   72322 kubeadm.go:310] 
	I0906 20:10:20.279291   72322 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:10:20.279420   72322 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:10:20.279524   72322 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:10:20.279535   72322 kubeadm.go:310] 
	I0906 20:10:20.279647   72322 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:10:20.279756   72322 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:10:20.279767   72322 kubeadm.go:310] 
	I0906 20:10:20.279896   72322 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280043   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:10:20.280080   72322 kubeadm.go:310] 	--control-plane 
	I0906 20:10:20.280090   72322 kubeadm.go:310] 
	I0906 20:10:20.280230   72322 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:10:20.280258   72322 kubeadm.go:310] 
	I0906 20:10:20.280365   72322 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280514   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:10:20.280532   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:10:20.280541   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:10:20.282066   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:10:20.283228   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:10:20.294745   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:10:20.317015   72322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-504385 minikube.k8s.io/updated_at=2024_09_06T20_10_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=no-preload-504385 minikube.k8s.io/primary=true
	I0906 20:10:20.528654   72322 ops.go:34] apiserver oom_adj: -16
	I0906 20:10:20.528681   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.029394   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.528922   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.029667   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.528814   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.029163   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.529709   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.029277   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.529466   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.668636   72322 kubeadm.go:1113] duration metric: took 4.351557657s to wait for elevateKubeSystemPrivileges
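	(Annotation: the burst of identical "kubectl get sa default" runs above, spaced roughly 500ms apart, is a poll-until-success loop: the cluster is only considered usable once the "default" ServiceAccount exists. Below is a hedged Go sketch of that pattern; runKubectl and waitForDefaultServiceAccount are hypothetical helper names, not minikube's elevateKubeSystemPrivileges implementation.)

	    // Sketch of the polling behind the repeated "kubectl get sa default" runs:
	    // retry about twice a second until the ServiceAccount exists or a timeout hits.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // runKubectl is a hypothetical helper that shells out to kubectl.
	    func runKubectl(args ...string) error {
	        return exec.Command("kubectl", args...).Run()
	    }

	    func waitForDefaultServiceAccount(timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if err := runKubectl("get", "sa", "default", "-n", "default"); err == nil {
	                return nil // the default ServiceAccount has been created
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out after %s waiting for the default ServiceAccount", timeout)
	    }

	    func main() {
	        if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }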
	I0906 20:10:24.668669   72322 kubeadm.go:394] duration metric: took 4m59.692142044s to StartCluster
	I0906 20:10:24.668690   72322 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.668775   72322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:10:24.670483   72322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.670765   72322 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:10:24.670874   72322 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:10:24.670975   72322 addons.go:69] Setting storage-provisioner=true in profile "no-preload-504385"
	I0906 20:10:24.670990   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:10:24.671015   72322 addons.go:234] Setting addon storage-provisioner=true in "no-preload-504385"
	W0906 20:10:24.671027   72322 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:10:24.670988   72322 addons.go:69] Setting default-storageclass=true in profile "no-preload-504385"
	I0906 20:10:24.671020   72322 addons.go:69] Setting metrics-server=true in profile "no-preload-504385"
	I0906 20:10:24.671053   72322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-504385"
	I0906 20:10:24.671069   72322 addons.go:234] Setting addon metrics-server=true in "no-preload-504385"
	I0906 20:10:24.671057   72322 host.go:66] Checking if "no-preload-504385" exists ...
	W0906 20:10:24.671080   72322 addons.go:243] addon metrics-server should already be in state true
	I0906 20:10:24.671112   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.671387   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671413   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671433   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671462   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671476   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671509   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.672599   72322 out.go:177] * Verifying Kubernetes components...
	I0906 20:10:24.674189   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:10:24.688494   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0906 20:10:24.689082   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.689564   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.689586   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.690020   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.690242   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.691753   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0906 20:10:24.691758   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0906 20:10:24.692223   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692314   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692744   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692761   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.692892   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692912   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.693162   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693498   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693821   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.693851   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694035   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694067   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694118   72322 addons.go:234] Setting addon default-storageclass=true in "no-preload-504385"
	W0906 20:10:24.694133   72322 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:10:24.694159   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.694503   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694533   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.710695   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0906 20:10:24.712123   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.712820   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.712844   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.713265   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.713488   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.714238   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0906 20:10:24.714448   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0906 20:10:24.714584   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.714801   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.715454   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715472   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715517   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.715631   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715643   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715961   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716468   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716527   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.717120   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.717170   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.717534   72322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:10:24.718838   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.719392   72322 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:24.719413   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:10:24.719435   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.720748   72322 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:10:22.717567   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:22.717827   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:24.722045   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:10:24.722066   72322 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:10:24.722084   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.722722   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723383   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.723408   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723545   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.723788   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.723970   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.724133   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.725538   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.725987   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.726006   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.726137   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.726317   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.726499   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.726629   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.734236   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0906 20:10:24.734597   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.735057   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.735069   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.735479   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.735612   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.737446   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.737630   72322 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:24.737647   72322 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:10:24.737658   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.740629   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741040   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.741063   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741251   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.741418   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.741530   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.741659   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.903190   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:10:24.944044   72322 node_ready.go:35] waiting up to 6m0s for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960395   72322 node_ready.go:49] node "no-preload-504385" has status "Ready":"True"
	I0906 20:10:24.960436   72322 node_ready.go:38] duration metric: took 16.357022ms for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960453   72322 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:24.981153   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:25.103072   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:25.113814   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:10:25.113843   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:10:25.123206   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:25.209178   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:10:25.209208   72322 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:10:25.255577   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.255604   72322 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:10:25.297179   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.336592   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336615   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.336915   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.336930   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.336938   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336945   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.337164   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.337178   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.350330   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.350356   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.350630   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.350648   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850349   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850377   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850688   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.850707   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850717   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850725   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850974   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.851012   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.033886   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.033918   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034215   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034221   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034241   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034250   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.034258   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034525   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034533   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034579   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034593   72322 addons.go:475] Verifying addon metrics-server=true in "no-preload-504385"
	I0906 20:10:26.036358   72322 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0906 20:10:26.037927   72322 addons.go:510] duration metric: took 1.367055829s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0906 20:10:26.989945   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:28.987386   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:28.987407   72322 pod_ready.go:82] duration metric: took 4.006228588s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:28.987419   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:30.994020   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:32.999308   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:32.999332   72322 pod_ready.go:82] duration metric: took 4.01190401s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:32.999344   72322 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005872   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.005898   72322 pod_ready.go:82] duration metric: took 1.006546878s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005908   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010279   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.010306   72322 pod_ready.go:82] duration metric: took 4.391154ms for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010315   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014331   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.014346   72322 pod_ready.go:82] duration metric: took 4.025331ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014354   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018361   72322 pod_ready.go:93] pod "kube-proxy-48s2x" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.018378   72322 pod_ready.go:82] duration metric: took 4.018525ms for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018386   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191606   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.191630   72322 pod_ready.go:82] duration metric: took 173.23777ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191638   72322 pod_ready.go:39] duration metric: took 9.231173272s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
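	(Annotation: the pod_ready lines above report, for each system-critical pod, whether its "Ready" status condition is True. The snippet below is an illustrative client-go sketch of that check, assuming a configured kubernetes.Interface; isPodReady is an illustrative name, not minikube's pod_ready.go.)

	    // Sketch of the readiness test behind the pod_ready log lines: a pod counts
	    // as "Ready" when its PodReady status condition has status ConditionTrue.
	    package readiness

	    import (
	        "context"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    func isPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	        pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, cond := range pod.Status.Conditions {
	            if cond.Type == corev1.PodReady {
	                return cond.Status == corev1.ConditionTrue, nil
	            }
	        }
	        // No PodReady condition yet (e.g. the pod is still Pending).
	        return false, nil
	    }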
	I0906 20:10:34.191652   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:10:34.191738   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:10:34.207858   72322 api_server.go:72] duration metric: took 9.537052258s to wait for apiserver process to appear ...
	I0906 20:10:34.207883   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:10:34.207904   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:10:34.214477   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:10:34.216178   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:10:34.216211   72322 api_server.go:131] duration metric: took 8.319856ms to wait for apiserver health ...
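	(Annotation: the api_server lines above probe the API server's /healthz endpoint over HTTPS on port 8443 and accept a 200 "ok" response. A minimal Go sketch of such a probe follows; the real check authenticates with the cluster's certificates, and skipping TLS verification here is purely a simplification for illustration.)

	    // Sketch of an API server health probe like the one logged above:
	    // GET https://<apiserver>:8443/healthz and treat HTTP 200 as healthy.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
	        }
	        resp, err := client.Get("https://192.168.61.184:8443/healthz")
	        if err != nil {
	            fmt.Println("apiserver not reachable:", err)
	            return
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	    }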
	I0906 20:10:34.216221   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:10:34.396409   72322 system_pods.go:59] 9 kube-system pods found
	I0906 20:10:34.396443   72322 system_pods.go:61] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.396451   72322 system_pods.go:61] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.396456   72322 system_pods.go:61] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.396461   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.396468   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.396472   72322 system_pods.go:61] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.396477   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.396487   72322 system_pods.go:61] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.396502   72322 system_pods.go:61] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.396514   72322 system_pods.go:74] duration metric: took 180.284785ms to wait for pod list to return data ...
	I0906 20:10:34.396526   72322 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:10:34.592160   72322 default_sa.go:45] found service account: "default"
	I0906 20:10:34.592186   72322 default_sa.go:55] duration metric: took 195.651674ms for default service account to be created ...
	I0906 20:10:34.592197   72322 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:10:34.795179   72322 system_pods.go:86] 9 kube-system pods found
	I0906 20:10:34.795210   72322 system_pods.go:89] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.795217   72322 system_pods.go:89] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.795221   72322 system_pods.go:89] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.795224   72322 system_pods.go:89] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.795228   72322 system_pods.go:89] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.795232   72322 system_pods.go:89] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.795238   72322 system_pods.go:89] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.795244   72322 system_pods.go:89] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.795249   72322 system_pods.go:89] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.795258   72322 system_pods.go:126] duration metric: took 203.05524ms to wait for k8s-apps to be running ...
	I0906 20:10:34.795270   72322 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:10:34.795328   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:34.810406   72322 system_svc.go:56] duration metric: took 15.127486ms WaitForService to wait for kubelet
	I0906 20:10:34.810437   72322 kubeadm.go:582] duration metric: took 10.13963577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:10:34.810461   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:10:34.993045   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:10:34.993077   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:10:34.993092   72322 node_conditions.go:105] duration metric: took 182.626456ms to run NodePressure ...
	I0906 20:10:34.993105   72322 start.go:241] waiting for startup goroutines ...
	I0906 20:10:34.993112   72322 start.go:246] waiting for cluster config update ...
	I0906 20:10:34.993122   72322 start.go:255] writing updated cluster config ...
	I0906 20:10:34.993401   72322 ssh_runner.go:195] Run: rm -f paused
	I0906 20:10:35.043039   72322 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:10:35.045782   72322 out.go:177] * Done! kubectl is now configured to use "no-preload-504385" cluster and "default" namespace by default
	I0906 20:11:02.719781   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:02.720062   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:02.720077   73230 kubeadm.go:310] 
	I0906 20:11:02.720125   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:11:02.720177   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:11:02.720189   73230 kubeadm.go:310] 
	I0906 20:11:02.720246   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:11:02.720290   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:11:02.720443   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:11:02.720469   73230 kubeadm.go:310] 
	I0906 20:11:02.720593   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:11:02.720665   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:11:02.720722   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:11:02.720746   73230 kubeadm.go:310] 
	I0906 20:11:02.720900   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:11:02.721018   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:11:02.721028   73230 kubeadm.go:310] 
	I0906 20:11:02.721180   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:11:02.721311   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:11:02.721405   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:11:02.721500   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:11:02.721512   73230 kubeadm.go:310] 
	I0906 20:11:02.722088   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:11:02.722199   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:11:02.722310   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0906 20:11:02.722419   73230 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0906 20:11:02.722469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:11:03.188091   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:11:03.204943   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:11:03.215434   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:11:03.215458   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:11:03.215506   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:11:03.225650   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:11:03.225713   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:11:03.236252   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:11:03.245425   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:11:03.245489   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:11:03.255564   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.264932   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:11:03.265014   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.274896   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:11:03.284027   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:11:03.284092   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:11:03.294368   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:11:03.377411   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:11:03.377509   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:11:03.537331   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:11:03.537590   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:11:03.537722   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:11:03.728458   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:11:03.730508   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:11:03.730621   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:11:03.730720   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:11:03.730869   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:11:03.730984   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:11:03.731082   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:11:03.731167   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:11:03.731258   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:11:03.731555   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:11:03.731896   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:11:03.732663   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:11:03.732953   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:11:03.733053   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:11:03.839927   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:11:03.988848   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:11:04.077497   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:11:04.213789   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:11:04.236317   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:11:04.237625   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:11:04.237719   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:11:04.399036   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:11:04.400624   73230 out.go:235]   - Booting up control plane ...
	I0906 20:11:04.400709   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:11:04.401417   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:11:04.402751   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:11:04.404122   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:11:04.407817   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:11:44.410273   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:11:44.410884   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:44.411132   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:49.411428   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:49.411674   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:59.412917   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:59.413182   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:19.414487   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:19.414692   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415457   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:59.415729   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415750   73230 kubeadm.go:310] 
	I0906 20:12:59.415808   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:12:59.415864   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:12:59.415874   73230 kubeadm.go:310] 
	I0906 20:12:59.415933   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:12:59.415979   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:12:59.416147   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:12:59.416167   73230 kubeadm.go:310] 
	I0906 20:12:59.416332   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:12:59.416372   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:12:59.416420   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:12:59.416428   73230 kubeadm.go:310] 
	I0906 20:12:59.416542   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:12:59.416650   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:12:59.416659   73230 kubeadm.go:310] 
	I0906 20:12:59.416818   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:12:59.416928   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:12:59.417030   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:12:59.417139   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:12:59.417153   73230 kubeadm.go:310] 
	I0906 20:12:59.417400   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:12:59.417485   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:12:59.417559   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0906 20:12:59.417626   73230 kubeadm.go:394] duration metric: took 8m3.018298427s to StartCluster
	I0906 20:12:59.417673   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:12:59.417741   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:12:59.464005   73230 cri.go:89] found id: ""
	I0906 20:12:59.464033   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.464040   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:12:59.464045   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:12:59.464101   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:12:59.504218   73230 cri.go:89] found id: ""
	I0906 20:12:59.504252   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.504264   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:12:59.504271   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:12:59.504327   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:12:59.541552   73230 cri.go:89] found id: ""
	I0906 20:12:59.541579   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.541589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:12:59.541596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:12:59.541663   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:12:59.580135   73230 cri.go:89] found id: ""
	I0906 20:12:59.580158   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.580168   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:12:59.580174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:12:59.580220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:12:59.622453   73230 cri.go:89] found id: ""
	I0906 20:12:59.622486   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.622498   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:12:59.622518   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:12:59.622587   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:12:59.661561   73230 cri.go:89] found id: ""
	I0906 20:12:59.661590   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.661601   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:12:59.661608   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:12:59.661668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:12:59.695703   73230 cri.go:89] found id: ""
	I0906 20:12:59.695732   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.695742   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:12:59.695749   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:12:59.695808   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:12:59.739701   73230 cri.go:89] found id: ""
	I0906 20:12:59.739733   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.739744   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:12:59.739756   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:12:59.739771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:12:59.791400   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:12:59.791428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:12:59.851142   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:12:59.851179   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:12:59.867242   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:12:59.867278   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:12:59.941041   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:12:59.941060   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:12:59.941071   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0906 20:13:00.061377   73230 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 20:13:00.061456   73230 out.go:270] * 
	W0906 20:13:00.061515   73230 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.061532   73230 out.go:270] * 
	W0906 20:13:00.062343   73230 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 20:13:00.065723   73230 out.go:201] 
	W0906 20:13:00.066968   73230 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.067028   73230 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 20:13:00.067059   73230 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 20:13:00.068497   73230 out.go:201] 
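	For reference, the node-side troubleshooting steps suggested in the log above can be gathered into a single pass. This is only a sketch assembled from commands already shown in the output (the kubelet healthz probe, the systemctl/journalctl checks, and the crictl invocation with the /var/run/crio/crio.sock endpoint); the socket path comes from this log and may differ on other runtime configurations.

	#!/bin/bash
	# Sketch of the checks suggested above; run via 'minikube ssh' or directly on the node.
	set -x
	# Same health probe that kubeadm's [kubelet-check] performs:
	curl -sSL http://localhost:10248/healthz || true
	# Kubelet service state and recent logs:
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# Kubernetes containers as seen by CRI-O (socket path taken from the log above):
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	If a control-plane container shows up as exited, its logs can be inspected with 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID', as noted in the kubeadm output above.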
	
	
	==> CRI-O <==
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.305184455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653924305161257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0229fa31-9cde-472c-bd5c-52a8120cf56f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.305913220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a282fd08-bcb9-462a-badb-e6dfe14f8a3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.305980522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a282fd08-bcb9-462a-badb-e6dfe14f8a3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.306175448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6872d43d4bac54e3320b11898e953d5a5e21d20cf62e8a4248a34d02034b598d,PodSandboxId:61671a3f844efbab17ecdebfec8cd4a97449ed3a0dcb74521c17faf3d68ad00c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653376406262904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2a4afa2-1018-41f6-aecf-1b6300f520a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f5b8c5f632895d883fbb544fc5f36d6ebc43564a52477c07945a6287cbbb24,PodSandboxId:4c0e3cf407781899a0b3ee235bceab29ae10c540b4ff14e385534ff8049fa367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653376009255546,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-v4r9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84854d53-cb74-42c8-bb74-92536fcd300d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c8cdb6a45a07214d197ad93e75971b29c3ecc288cdccc9923ee083245a91f,PodSandboxId:3fa4ea69acc96abe485531b92ae9c4f859fa06c660a935f0b0d1300713c5685e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653375920428793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h9hv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: bf6ec352-3abf-4738-8f19-8a70916e98a9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa883ab3c2a4289d5db610b9f63e801a91613eef4ee4de48ce4d1da6064ac2d0,PodSandboxId:89c54fb230094f33fc5afb9e0a82e09f39e5a9590c27496d264a8e99d4e8d90a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1725653375260910062,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e0658b-592e-4d52-b431-f1227e742e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96410431602de98d0197451fda7c9d7dcd9567e6cc77b4b5d86becd313e505e,PodSandboxId:e0ed1ad3b6b6b30fe7aec4853cbfbb12acd9ed0f1f11ac8b9c47671fd776786a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653364321397039
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e042d563b1c2c161c2ba7b23067597,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0e2891a9f3d7070f1b2b40519fa57723fd596c1ac79375d0a965a516245625,PodSandboxId:570593df6df4be795a2deb2fbf510e950ca14f1af60062a868d44526ddc26040,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653364265291946,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0768cc3e6c91c8a2be732353a197244b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7100d8ec8ed109092f3ae87316812c6ae9274e92dca3559146ba2517ba1ec08,PodSandboxId:f7d73a66b27785f50e1a9d465aaf568d888b5d64527eb419e9bade59ceca6777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653364238419268,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fe32967e4110573cb6d097be950c99edb979c60718974323203291d8d6b03b,PodSandboxId:e2e3004b3fce6d199df1d0ea32a3e939728166d2d354875630dddb7e7ac30e92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653364201830039,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3333647b9fedcba3932ef7cb0607608,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3779deb7d72135701a2ac92cd1d924be7014e72efab533cfc9e13fc9cd9733,PodSandboxId:28592630c813981f553f072c644797adfab13f879ae03621750edd21de770422,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653075044979637,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a282fd08-bcb9-462a-badb-e6dfe14f8a3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.341385580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da1bcdb7-4955-4b4b-a12a-37206fac0fe2 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.341472182Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da1bcdb7-4955-4b4b-a12a-37206fac0fe2 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.343032132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1823c698-f29d-4bad-afdc-f5e26126123e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.343461458Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653924343439034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1823c698-f29d-4bad-afdc-f5e26126123e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.344142275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd84082f-f02a-4e89-afeb-4478a6bd4178 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.344196729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd84082f-f02a-4e89-afeb-4478a6bd4178 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.344384672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6872d43d4bac54e3320b11898e953d5a5e21d20cf62e8a4248a34d02034b598d,PodSandboxId:61671a3f844efbab17ecdebfec8cd4a97449ed3a0dcb74521c17faf3d68ad00c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653376406262904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2a4afa2-1018-41f6-aecf-1b6300f520a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f5b8c5f632895d883fbb544fc5f36d6ebc43564a52477c07945a6287cbbb24,PodSandboxId:4c0e3cf407781899a0b3ee235bceab29ae10c540b4ff14e385534ff8049fa367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653376009255546,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-v4r9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84854d53-cb74-42c8-bb74-92536fcd300d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c8cdb6a45a07214d197ad93e75971b29c3ecc288cdccc9923ee083245a91f,PodSandboxId:3fa4ea69acc96abe485531b92ae9c4f859fa06c660a935f0b0d1300713c5685e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653375920428793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h9hv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: bf6ec352-3abf-4738-8f19-8a70916e98a9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa883ab3c2a4289d5db610b9f63e801a91613eef4ee4de48ce4d1da6064ac2d0,PodSandboxId:89c54fb230094f33fc5afb9e0a82e09f39e5a9590c27496d264a8e99d4e8d90a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1725653375260910062,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e0658b-592e-4d52-b431-f1227e742e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96410431602de98d0197451fda7c9d7dcd9567e6cc77b4b5d86becd313e505e,PodSandboxId:e0ed1ad3b6b6b30fe7aec4853cbfbb12acd9ed0f1f11ac8b9c47671fd776786a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653364321397039
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e042d563b1c2c161c2ba7b23067597,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0e2891a9f3d7070f1b2b40519fa57723fd596c1ac79375d0a965a516245625,PodSandboxId:570593df6df4be795a2deb2fbf510e950ca14f1af60062a868d44526ddc26040,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653364265291946,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0768cc3e6c91c8a2be732353a197244b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7100d8ec8ed109092f3ae87316812c6ae9274e92dca3559146ba2517ba1ec08,PodSandboxId:f7d73a66b27785f50e1a9d465aaf568d888b5d64527eb419e9bade59ceca6777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653364238419268,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fe32967e4110573cb6d097be950c99edb979c60718974323203291d8d6b03b,PodSandboxId:e2e3004b3fce6d199df1d0ea32a3e939728166d2d354875630dddb7e7ac30e92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653364201830039,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3333647b9fedcba3932ef7cb0607608,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3779deb7d72135701a2ac92cd1d924be7014e72efab533cfc9e13fc9cd9733,PodSandboxId:28592630c813981f553f072c644797adfab13f879ae03621750edd21de770422,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653075044979637,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd84082f-f02a-4e89-afeb-4478a6bd4178 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.384358335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d28a38d1-a585-432b-b2fe-e7de1dca334a name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.384430401Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d28a38d1-a585-432b-b2fe-e7de1dca334a name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.385590357Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9cba4592-35d7-40c6-9d95-6293d517b0e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.386194108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653924386171191,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cba4592-35d7-40c6-9d95-6293d517b0e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.386814782Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85bcfd3d-bf83-44ad-8adc-65349df4787e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.386868611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85bcfd3d-bf83-44ad-8adc-65349df4787e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.387247397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6872d43d4bac54e3320b11898e953d5a5e21d20cf62e8a4248a34d02034b598d,PodSandboxId:61671a3f844efbab17ecdebfec8cd4a97449ed3a0dcb74521c17faf3d68ad00c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653376406262904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2a4afa2-1018-41f6-aecf-1b6300f520a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f5b8c5f632895d883fbb544fc5f36d6ebc43564a52477c07945a6287cbbb24,PodSandboxId:4c0e3cf407781899a0b3ee235bceab29ae10c540b4ff14e385534ff8049fa367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653376009255546,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-v4r9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84854d53-cb74-42c8-bb74-92536fcd300d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c8cdb6a45a07214d197ad93e75971b29c3ecc288cdccc9923ee083245a91f,PodSandboxId:3fa4ea69acc96abe485531b92ae9c4f859fa06c660a935f0b0d1300713c5685e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653375920428793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h9hv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: bf6ec352-3abf-4738-8f19-8a70916e98a9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa883ab3c2a4289d5db610b9f63e801a91613eef4ee4de48ce4d1da6064ac2d0,PodSandboxId:89c54fb230094f33fc5afb9e0a82e09f39e5a9590c27496d264a8e99d4e8d90a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1725653375260910062,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e0658b-592e-4d52-b431-f1227e742e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96410431602de98d0197451fda7c9d7dcd9567e6cc77b4b5d86becd313e505e,PodSandboxId:e0ed1ad3b6b6b30fe7aec4853cbfbb12acd9ed0f1f11ac8b9c47671fd776786a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653364321397039
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e042d563b1c2c161c2ba7b23067597,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0e2891a9f3d7070f1b2b40519fa57723fd596c1ac79375d0a965a516245625,PodSandboxId:570593df6df4be795a2deb2fbf510e950ca14f1af60062a868d44526ddc26040,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653364265291946,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0768cc3e6c91c8a2be732353a197244b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7100d8ec8ed109092f3ae87316812c6ae9274e92dca3559146ba2517ba1ec08,PodSandboxId:f7d73a66b27785f50e1a9d465aaf568d888b5d64527eb419e9bade59ceca6777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653364238419268,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fe32967e4110573cb6d097be950c99edb979c60718974323203291d8d6b03b,PodSandboxId:e2e3004b3fce6d199df1d0ea32a3e939728166d2d354875630dddb7e7ac30e92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653364201830039,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3333647b9fedcba3932ef7cb0607608,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3779deb7d72135701a2ac92cd1d924be7014e72efab533cfc9e13fc9cd9733,PodSandboxId:28592630c813981f553f072c644797adfab13f879ae03621750edd21de770422,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653075044979637,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85bcfd3d-bf83-44ad-8adc-65349df4787e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.422371300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=540c9218-6f61-40f3-badf-33ccb74c5df9 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.422442920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=540c9218-6f61-40f3-badf-33ccb74c5df9 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.423513383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e9c6449-1410-4ff0-83c3-05e7333ad90f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.423994044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653924423969074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e9c6449-1410-4ff0-83c3-05e7333ad90f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.424466828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5895752-0c5e-44bc-a3c0-961292b99805 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.424523847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5895752-0c5e-44bc-a3c0-961292b99805 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:18:44 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:18:44.424744954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6872d43d4bac54e3320b11898e953d5a5e21d20cf62e8a4248a34d02034b598d,PodSandboxId:61671a3f844efbab17ecdebfec8cd4a97449ed3a0dcb74521c17faf3d68ad00c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653376406262904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2a4afa2-1018-41f6-aecf-1b6300f520a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f5b8c5f632895d883fbb544fc5f36d6ebc43564a52477c07945a6287cbbb24,PodSandboxId:4c0e3cf407781899a0b3ee235bceab29ae10c540b4ff14e385534ff8049fa367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653376009255546,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-v4r9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84854d53-cb74-42c8-bb74-92536fcd300d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c8cdb6a45a07214d197ad93e75971b29c3ecc288cdccc9923ee083245a91f,PodSandboxId:3fa4ea69acc96abe485531b92ae9c4f859fa06c660a935f0b0d1300713c5685e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653375920428793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h9hv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: bf6ec352-3abf-4738-8f19-8a70916e98a9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa883ab3c2a4289d5db610b9f63e801a91613eef4ee4de48ce4d1da6064ac2d0,PodSandboxId:89c54fb230094f33fc5afb9e0a82e09f39e5a9590c27496d264a8e99d4e8d90a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1725653375260910062,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e0658b-592e-4d52-b431-f1227e742e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96410431602de98d0197451fda7c9d7dcd9567e6cc77b4b5d86becd313e505e,PodSandboxId:e0ed1ad3b6b6b30fe7aec4853cbfbb12acd9ed0f1f11ac8b9c47671fd776786a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653364321397039
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e042d563b1c2c161c2ba7b23067597,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0e2891a9f3d7070f1b2b40519fa57723fd596c1ac79375d0a965a516245625,PodSandboxId:570593df6df4be795a2deb2fbf510e950ca14f1af60062a868d44526ddc26040,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653364265291946,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0768cc3e6c91c8a2be732353a197244b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7100d8ec8ed109092f3ae87316812c6ae9274e92dca3559146ba2517ba1ec08,PodSandboxId:f7d73a66b27785f50e1a9d465aaf568d888b5d64527eb419e9bade59ceca6777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653364238419268,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fe32967e4110573cb6d097be950c99edb979c60718974323203291d8d6b03b,PodSandboxId:e2e3004b3fce6d199df1d0ea32a3e939728166d2d354875630dddb7e7ac30e92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653364201830039,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3333647b9fedcba3932ef7cb0607608,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3779deb7d72135701a2ac92cd1d924be7014e72efab533cfc9e13fc9cd9733,PodSandboxId:28592630c813981f553f072c644797adfab13f879ae03621750edd21de770422,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653075044979637,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5895752-0c5e-44bc-a3c0-961292b99805 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6872d43d4bac5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   61671a3f844ef       storage-provisioner
	92f5b8c5f6328       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   4c0e3cf407781       coredns-6f6b679f8f-v4r9m
	de3c8cdb6a45a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3fa4ea69acc96       coredns-6f6b679f8f-h9hv9
	fa883ab3c2a42       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   89c54fb230094       kube-proxy-7846f
	f96410431602d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   e0ed1ad3b6b6b       kube-scheduler-default-k8s-diff-port-653828
	ea0e2891a9f3d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   570593df6df4b       etcd-default-k8s-diff-port-653828
	a7100d8ec8ed1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   f7d73a66b2778       kube-apiserver-default-k8s-diff-port-653828
	c0fe32967e411       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   e2e3004b3fce6       kube-controller-manager-default-k8s-diff-port-653828
	2d3779deb7d72       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   28592630c8139       kube-apiserver-default-k8s-diff-port-653828
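The container table above is the same CRI view that CRI-O keeps answering in the ListContainers traces earlier in the log. A minimal way to collect that snapshot directly from the node is sketched below, assuming crictl is installed in the guest and CRI-O serves on its default socket (unix:///var/run/crio/crio.sock); the profile name is the one used throughout this report.

    # sketch: list all containers (running and exited) straight from CRI-O inside the VM
    minikube -p default-k8s-diff-port-653828 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a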
	
	
	==> coredns [92f5b8c5f632895d883fbb544fc5f36d6ebc43564a52477c07945a6287cbbb24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [de3c8cdb6a45a07214d197ad93e75971b29c3ecc288cdccc9923ee083245a91f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-653828
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-653828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=default-k8s-diff-port-653828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T20_09_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 20:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-653828
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 20:18:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 20:14:45 +0000   Fri, 06 Sep 2024 20:09:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 20:14:45 +0000   Fri, 06 Sep 2024 20:09:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 20:14:45 +0000   Fri, 06 Sep 2024 20:09:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 20:14:45 +0000   Fri, 06 Sep 2024 20:09:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.16
	  Hostname:    default-k8s-diff-port-653828
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ecf3f0842600481aa4cf97145c6b8004
	  System UUID:                ecf3f084-2600-481a-a4cf-97145c6b8004
	  Boot ID:                    7c4b00cb-e45a-48b2-8d6e-bc259b9684bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-h9hv9                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-6f6b679f8f-v4r9m                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-default-k8s-diff-port-653828                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-default-k8s-diff-port-653828             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-653828    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-7846f                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-default-k8s-diff-port-653828             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-nwk7f                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m8s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s  kubelet          Node default-k8s-diff-port-653828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s  kubelet          Node default-k8s-diff-port-653828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s  kubelet          Node default-k8s-diff-port-653828 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s  node-controller  Node default-k8s-diff-port-653828 event: Registered Node default-k8s-diff-port-653828 in Controller
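The node summary above was presumably gathered with kubectl's describe verb; a sketch of the equivalent call, assuming the kubeconfig context is named after the minikube profile as usual:

    kubectl --context default-k8s-diff-port-653828 describe node default-k8s-diff-port-653828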
	
	
	==> dmesg <==
	[  +0.054159] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040179] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.907754] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.569305] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.631495] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.514601] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.061919] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063415] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.200764] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.117346] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.280152] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.240064] systemd-fstab-generator[787]: Ignoring "noauto" option for root device
	[  +1.979589] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +0.066666] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.533168] kauditd_printk_skb: 69 callbacks suppressed
	[  +9.293318] kauditd_printk_skb: 90 callbacks suppressed
	[Sep 6 20:09] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.616644] systemd-fstab-generator[2549]: Ignoring "noauto" option for root device
	[  +4.495490] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.891108] systemd-fstab-generator[2875]: Ignoring "noauto" option for root device
	[  +4.913870] systemd-fstab-generator[2985]: Ignoring "noauto" option for root device
	[  +0.100765] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.010924] kauditd_printk_skb: 87 callbacks suppressed
	
	
	==> etcd [ea0e2891a9f3d7070f1b2b40519fa57723fd596c1ac79375d0a965a516245625] <==
	{"level":"info","ts":"2024-09-06T20:09:24.655075Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-06T20:09:24.663630Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"72247325455803ad","initial-advertise-peer-urls":["https://192.168.50.16:2380"],"listen-peer-urls":["https://192.168.50.16:2380"],"advertise-client-urls":["https://192.168.50.16:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.16:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T20:09:24.666010Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T20:09:24.668120Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.16:2380"}
	{"level":"info","ts":"2024-09-06T20:09:24.668164Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.16:2380"}
	{"level":"info","ts":"2024-09-06T20:09:24.887846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72247325455803ad is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-06T20:09:24.887993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72247325455803ad became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-06T20:09:24.888052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72247325455803ad received MsgPreVoteResp from 72247325455803ad at term 1"}
	{"level":"info","ts":"2024-09-06T20:09:24.888085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72247325455803ad became candidate at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:24.888109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72247325455803ad received MsgVoteResp from 72247325455803ad at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:24.888136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72247325455803ad became leader at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:24.888161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72247325455803ad elected leader 72247325455803ad at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:24.893024Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:24.895156Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"72247325455803ad","local-member-attributes":"{Name:default-k8s-diff-port-653828 ClientURLs:[https://192.168.50.16:2379]}","request-path":"/0/members/72247325455803ad/attributes","cluster-id":"ca7d65c2cc2a573","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T20:09:24.897829Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ca7d65c2cc2a573","local-member-id":"72247325455803ad","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:24.897939Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:24.897979Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:24.898024Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:09:24.898320Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:09:24.899948Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:09:24.903673Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T20:09:24.906401Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:09:24.907481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.16:2379"}
	{"level":"info","ts":"2024-09-06T20:09:24.908022Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T20:09:24.908094Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:18:44 up 14 min,  0 users,  load average: 0.09, 0.12, 0.13
	Linux default-k8s-diff-port-653828 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2d3779deb7d72135701a2ac92cd1d924be7014e72efab533cfc9e13fc9cd9733] <==
	W0906 20:09:15.602579       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.605099       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.794620       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.808731       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.852556       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.890161       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.897072       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:16.054721       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:19.309688       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:19.373458       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:19.436881       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:19.645275       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:19.765259       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.016108       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.021995       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.130455       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.262138       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.271259       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.379197       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.401122       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.421176       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.422509       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.452508       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.663900       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.715709       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a7100d8ec8ed109092f3ae87316812c6ae9274e92dca3559146ba2517ba1ec08] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0906 20:14:27.849012       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:14:27.849112       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0906 20:14:27.850045       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:14:27.850691       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:15:27.850961       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:15:27.851113       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0906 20:15:27.850969       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:15:27.851181       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0906 20:15:27.852486       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:15:27.852532       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:17:27.853088       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:17:27.853187       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0906 20:17:27.853083       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:17:27.853288       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:17:27.854584       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:17:27.854694       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
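The repeated 503s while aggregating v1beta1.metrics.k8s.io mean the metrics-server APIService never became available, which lines up with the metrics-server related failures in this run and with the stale metrics.k8s.io/v1beta1 discovery errors in the controller-manager log below. A sketch for confirming this from the client side (the APIService name is standard; the k8s-app=metrics-server label is assumed to match the addon's deployment):

    kubectl --context default-k8s-diff-port-653828 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-653828 -n kube-system get pods -l k8s-app=metrics-server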
	
	
	==> kube-controller-manager [c0fe32967e4110573cb6d097be950c99edb979c60718974323203291d8d6b03b] <==
	E0906 20:13:33.728124       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:13:34.283030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:14:03.734898       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:14:04.290830       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:14:33.741742       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:14:34.299090       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:14:45.823046       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-653828"
	E0906 20:15:03.749008       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:15:04.307593       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:15:21.839124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="378.164µs"
	E0906 20:15:33.756284       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:15:34.315529       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:15:36.838551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="144.541µs"
	E0906 20:16:03.762609       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:16:04.323591       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:16:33.769987       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:16:34.332030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:17:03.778053       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:17:04.341332       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:17:33.785039       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:17:34.350263       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:18:03.791434       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:18:04.357648       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:18:33.797277       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:18:34.366305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fa883ab3c2a4289d5db610b9f63e801a91613eef4ee4de48ce4d1da6064ac2d0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 20:09:35.932980       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 20:09:36.061830       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.16"]
	E0906 20:09:36.062207       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 20:09:36.462736       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 20:09:36.463336       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 20:09:36.463373       1 server_linux.go:169] "Using iptables Proxier"
	I0906 20:09:36.473938       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 20:09:36.474220       1 server.go:483] "Version info" version="v1.31.0"
	I0906 20:09:36.474234       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:09:36.476661       1 config.go:197] "Starting service config controller"
	I0906 20:09:36.476677       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 20:09:36.476695       1 config.go:104] "Starting endpoint slice config controller"
	I0906 20:09:36.476698       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 20:09:36.476725       1 config.go:326] "Starting node config controller"
	I0906 20:09:36.476729       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 20:09:36.578210       1 shared_informer.go:320] Caches are synced for node config
	I0906 20:09:36.578817       1 shared_informer.go:320] Caches are synced for service config
	I0906 20:09:36.578831       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f96410431602de98d0197451fda7c9d7dcd9567e6cc77b4b5d86becd313e505e] <==
	W0906 20:09:26.891960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 20:09:26.892059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.710825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 20:09:27.710876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.841876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 20:09:27.841940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.842992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:27.843035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.918133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 20:09:27.918205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.956695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:27.956747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.968052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 20:09:27.968110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.971360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 20:09:27.971407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.979998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 20:09:27.980095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:28.018346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:28.018832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:28.167086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 20:09:28.167146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:28.214024       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 20:09:28.214217       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0906 20:09:30.787543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 20:17:29 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:17:29.993790    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653849993165316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:38 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:17:38.824096    2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nwk7f" podUID="6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe"
	Sep 06 20:17:39 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:17:39.995114    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653859994825477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:39 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:17:39.995148    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653859994825477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:49 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:17:49.996424    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653869996064681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:49 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:17:49.996688    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653869996064681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:50 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:17:50.822440    2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nwk7f" podUID="6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe"
	Sep 06 20:17:59 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:17:59.998672    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653879998337592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:17:59 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:17:59.999195    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653879998337592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:04 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:04.823086    2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nwk7f" podUID="6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe"
	Sep 06 20:18:10 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:10.000634    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653890000306248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:10 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:10.001104    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653890000306248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:17 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:17.822459    2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nwk7f" podUID="6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe"
	Sep 06 20:18:20 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:20.003725    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653900003158941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:20 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:20.004046    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653900003158941,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:29 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:29.842929    2882 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 20:18:29 default-k8s-diff-port-653828 kubelet[2882]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 20:18:29 default-k8s-diff-port-653828 kubelet[2882]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 20:18:29 default-k8s-diff-port-653828 kubelet[2882]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 20:18:29 default-k8s-diff-port-653828 kubelet[2882]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 20:18:30 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:30.005552    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653910005185502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:30 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:30.005579    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653910005185502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:30 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:30.823198    2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nwk7f" podUID="6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe"
	Sep 06 20:18:40 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:40.007936    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653920007254952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:40 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:18:40.007976    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653920007254952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6872d43d4bac54e3320b11898e953d5a5e21d20cf62e8a4248a34d02034b598d] <==
	I0906 20:09:36.567056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 20:09:36.586146       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 20:09:36.586298       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 20:09:36.615073       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 20:09:36.615552       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653828_3cd0d618-c03c-4aec-a5cc-4b988c4af110!
	I0906 20:09:36.626868       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"573f1391-b9fd-4ded-9a19-90e70383b09a", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-653828_3cd0d618-c03c-4aec-a5cc-4b988c4af110 became leader
	I0906 20:09:36.718215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653828_3cd0d618-c03c-4aec-a5cc-4b988c4af110!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-653828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-nwk7f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-653828 describe pod metrics-server-6867b74b74-nwk7f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-653828 describe pod metrics-server-6867b74b74-nwk7f: exit status 1 (61.201289ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-nwk7f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-653828 describe pod metrics-server-6867b74b74-nwk7f: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0906 20:10:50.866606   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:11:44.178324   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:11:57.122540   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:12:53.377527   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-504385 -n no-preload-504385
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-06 20:19:35.553833896 +0000 UTC m=+6622.043587427
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504385 -n no-preload-504385
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-504385 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-504385 logs -n 25: (2.146986091s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-603826 sudo cat                              | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo find                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo crio                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-603826                                       | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-859361 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | disable-driver-mounts-859361                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:57 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-504385             | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-458066            | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653828  | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC | 06 Sep 24 19:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC |                     |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-504385                  | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-458066                 | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-843298        | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653828       | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-843298             | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 20:00:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:00:55.455816   73230 out.go:345] Setting OutFile to fd 1 ...
	I0906 20:00:55.455933   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.455943   73230 out.go:358] Setting ErrFile to fd 2...
	I0906 20:00:55.455951   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.456141   73230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 20:00:55.456685   73230 out.go:352] Setting JSON to false
	I0906 20:00:55.457698   73230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6204,"bootTime":1725646651,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 20:00:55.457762   73230 start.go:139] virtualization: kvm guest
	I0906 20:00:55.459863   73230 out.go:177] * [old-k8s-version-843298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 20:00:55.461119   73230 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 20:00:55.461167   73230 notify.go:220] Checking for updates...
	I0906 20:00:55.463398   73230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:00:55.464573   73230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:00:55.465566   73230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:00:55.466605   73230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 20:00:55.467834   73230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:00:55.469512   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:00:55.470129   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.470183   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.484881   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I0906 20:00:55.485238   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.485752   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.485776   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.486108   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.486296   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.488175   73230 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 20:00:55.489359   73230 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 20:00:55.489671   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.489705   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.504589   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0906 20:00:55.505047   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.505557   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.505581   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.505867   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.506018   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.541116   73230 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 20:00:55.542402   73230 start.go:297] selected driver: kvm2
	I0906 20:00:55.542423   73230 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.542548   73230 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:00:55.543192   73230 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.543257   73230 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 20:00:55.558465   73230 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 20:00:55.558833   73230 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:00:55.558865   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:00:55.558875   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:00:55.558908   73230 start.go:340] cluster config:
	{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.559011   73230 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.561521   73230 out.go:177] * Starting "old-k8s-version-843298" primary control-plane node in "old-k8s-version-843298" cluster
	I0906 20:00:55.309027   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:58.377096   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:55.562714   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:00:55.562760   73230 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 20:00:55.562773   73230 cache.go:56] Caching tarball of preloaded images
	I0906 20:00:55.562856   73230 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 20:00:55.562868   73230 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0906 20:00:55.562977   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:00:55.563173   73230 start.go:360] acquireMachinesLock for old-k8s-version-843298: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:01:04.457122   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:07.529093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:13.609120   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:16.681107   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:22.761164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:25.833123   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:31.913167   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:34.985108   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:41.065140   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:44.137176   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:50.217162   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:53.289137   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:59.369093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:02.441171   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:08.521164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:11.593164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:17.673124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:20.745159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:26.825154   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:29.897211   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:35.977181   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:39.049161   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:45.129172   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:48.201208   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:54.281103   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:57.353175   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:03.433105   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:06.505124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:12.585121   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:15.657169   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:21.737151   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:24.809135   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:30.889180   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:33.961145   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:40.041159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:43.113084   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:46.117237   72441 start.go:364] duration metric: took 4m28.485189545s to acquireMachinesLock for "embed-certs-458066"
	I0906 20:03:46.117298   72441 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:03:46.117309   72441 fix.go:54] fixHost starting: 
	I0906 20:03:46.117737   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:03:46.117773   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:03:46.132573   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0906 20:03:46.133029   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:03:46.133712   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:03:46.133743   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:03:46.134097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:03:46.134322   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:03:46.134505   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:03:46.136291   72441 fix.go:112] recreateIfNeeded on embed-certs-458066: state=Stopped err=<nil>
	I0906 20:03:46.136313   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	W0906 20:03:46.136466   72441 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:03:46.138544   72441 out.go:177] * Restarting existing kvm2 VM for "embed-certs-458066" ...
	I0906 20:03:46.139833   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Start
	I0906 20:03:46.140001   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring networks are active...
	I0906 20:03:46.140754   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network default is active
	I0906 20:03:46.141087   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network mk-embed-certs-458066 is active
	I0906 20:03:46.141402   72441 main.go:141] libmachine: (embed-certs-458066) Getting domain xml...
	I0906 20:03:46.142202   72441 main.go:141] libmachine: (embed-certs-458066) Creating domain...
	I0906 20:03:47.351460   72441 main.go:141] libmachine: (embed-certs-458066) Waiting to get IP...
	I0906 20:03:47.352248   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.352628   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.352699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.352597   73827 retry.go:31] will retry after 202.870091ms: waiting for machine to come up
	I0906 20:03:46.114675   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:03:46.114711   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115092   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:03:46.115118   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115306   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:03:46.117092   72322 machine.go:96] duration metric: took 4m37.429712277s to provisionDockerMachine
	I0906 20:03:46.117135   72322 fix.go:56] duration metric: took 4m37.451419912s for fixHost
	I0906 20:03:46.117144   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 4m37.45145595s
	W0906 20:03:46.117167   72322 start.go:714] error starting host: provision: host is not running
	W0906 20:03:46.117242   72322 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0906 20:03:46.117252   72322 start.go:729] Will try again in 5 seconds ...
	I0906 20:03:47.557228   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.557656   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.557682   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.557606   73827 retry.go:31] will retry after 357.664781ms: waiting for machine to come up
	I0906 20:03:47.917575   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.918041   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.918068   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.918005   73827 retry.go:31] will retry after 338.480268ms: waiting for machine to come up
	I0906 20:03:48.258631   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.259269   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.259305   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.259229   73827 retry.go:31] will retry after 554.173344ms: waiting for machine to come up
	I0906 20:03:48.814947   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.815491   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.815523   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.815449   73827 retry.go:31] will retry after 601.029419ms: waiting for machine to come up
	I0906 20:03:49.418253   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:49.418596   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:49.418623   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:49.418548   73827 retry.go:31] will retry after 656.451458ms: waiting for machine to come up
	I0906 20:03:50.076488   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:50.076908   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:50.076928   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:50.076875   73827 retry.go:31] will retry after 1.13800205s: waiting for machine to come up
	I0906 20:03:51.216380   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:51.216801   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:51.216831   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:51.216758   73827 retry.go:31] will retry after 1.071685673s: waiting for machine to come up
	I0906 20:03:52.289760   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:52.290174   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:52.290202   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:52.290125   73827 retry.go:31] will retry after 1.581761127s: waiting for machine to come up
	I0906 20:03:51.119269   72322 start.go:360] acquireMachinesLock for no-preload-504385: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:03:53.873755   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:53.874150   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:53.874184   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:53.874120   73827 retry.go:31] will retry after 1.99280278s: waiting for machine to come up
	I0906 20:03:55.869267   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:55.869747   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:55.869776   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:55.869685   73827 retry.go:31] will retry after 2.721589526s: waiting for machine to come up
	I0906 20:03:58.594012   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:58.594402   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:58.594428   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:58.594354   73827 retry.go:31] will retry after 2.763858077s: waiting for machine to come up
	I0906 20:04:01.359424   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:01.359775   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:04:01.359809   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:04:01.359736   73827 retry.go:31] will retry after 3.822567166s: waiting for machine to come up
	I0906 20:04:06.669858   72867 start.go:364] duration metric: took 4m9.363403512s to acquireMachinesLock for "default-k8s-diff-port-653828"
	I0906 20:04:06.669929   72867 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:06.669938   72867 fix.go:54] fixHost starting: 
	I0906 20:04:06.670353   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:06.670393   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:06.688290   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44215
	I0906 20:04:06.688752   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:06.689291   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:04:06.689314   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:06.689692   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:06.689886   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:06.690048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:04:06.691557   72867 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653828: state=Stopped err=<nil>
	I0906 20:04:06.691592   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	W0906 20:04:06.691742   72867 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:06.693924   72867 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653828" ...
	I0906 20:04:06.694965   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Start
	I0906 20:04:06.695148   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring networks are active...
	I0906 20:04:06.695900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network default is active
	I0906 20:04:06.696316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network mk-default-k8s-diff-port-653828 is active
	I0906 20:04:06.696698   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Getting domain xml...
	I0906 20:04:06.697469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Creating domain...
	I0906 20:04:05.186782   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187288   72441 main.go:141] libmachine: (embed-certs-458066) Found IP for machine: 192.168.39.118
	I0906 20:04:05.187301   72441 main.go:141] libmachine: (embed-certs-458066) Reserving static IP address...
	I0906 20:04:05.187340   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has current primary IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187764   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.187784   72441 main.go:141] libmachine: (embed-certs-458066) Reserved static IP address: 192.168.39.118
	I0906 20:04:05.187797   72441 main.go:141] libmachine: (embed-certs-458066) DBG | skip adding static IP to network mk-embed-certs-458066 - found existing host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"}
	I0906 20:04:05.187805   72441 main.go:141] libmachine: (embed-certs-458066) Waiting for SSH to be available...
	I0906 20:04:05.187848   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Getting to WaitForSSH function...
	I0906 20:04:05.190229   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190546   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.190576   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190643   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH client type: external
	I0906 20:04:05.190679   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa (-rw-------)
	I0906 20:04:05.190714   72441 main.go:141] libmachine: (embed-certs-458066) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:05.190727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | About to run SSH command:
	I0906 20:04:05.190761   72441 main.go:141] libmachine: (embed-certs-458066) DBG | exit 0
	I0906 20:04:05.317160   72441 main.go:141] libmachine: (embed-certs-458066) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:05.317483   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetConfigRaw
	I0906 20:04:05.318089   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.320559   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.320944   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.320971   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.321225   72441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/config.json ...
	I0906 20:04:05.321445   72441 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:05.321465   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:05.321720   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.323699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.323972   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.324009   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.324126   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.324303   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324444   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324561   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.324706   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.324940   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.324953   72441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:05.437192   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:05.437217   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437479   72441 buildroot.go:166] provisioning hostname "embed-certs-458066"
	I0906 20:04:05.437495   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437665   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.440334   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440705   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.440733   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440925   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.441100   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441260   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441405   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.441573   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.441733   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.441753   72441 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-458066 && echo "embed-certs-458066" | sudo tee /etc/hostname
	I0906 20:04:05.566958   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-458066
	
	I0906 20:04:05.566986   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.569652   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.569984   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.570014   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.570158   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.570342   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570504   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570648   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.570838   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.571042   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.571060   72441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-458066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-458066/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-458066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:05.689822   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:05.689855   72441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:05.689882   72441 buildroot.go:174] setting up certificates
	I0906 20:04:05.689891   72441 provision.go:84] configureAuth start
	I0906 20:04:05.689899   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.690182   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.692758   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693151   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.693172   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693308   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.695364   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.695754   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695909   72441 provision.go:143] copyHostCerts
	I0906 20:04:05.695957   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:05.695975   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:05.696042   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:05.696123   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:05.696130   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:05.696153   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:05.696248   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:05.696257   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:05.696280   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:05.696329   72441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-458066 san=[127.0.0.1 192.168.39.118 embed-certs-458066 localhost minikube]
	I0906 20:04:06.015593   72441 provision.go:177] copyRemoteCerts
	I0906 20:04:06.015656   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:06.015683   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.018244   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018598   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.018630   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018784   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.018990   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.019169   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.019278   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.110170   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 20:04:06.136341   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:06.161181   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:06.184758   72441 provision.go:87] duration metric: took 494.857261ms to configureAuth
	I0906 20:04:06.184786   72441 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:06.184986   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:06.185049   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.187564   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.187955   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.187978   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.188153   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.188399   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188571   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.188920   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.189070   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.189084   72441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:06.425480   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:06.425518   72441 machine.go:96] duration metric: took 1.104058415s to provisionDockerMachine
	I0906 20:04:06.425535   72441 start.go:293] postStartSetup for "embed-certs-458066" (driver="kvm2")
	I0906 20:04:06.425548   72441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:06.425572   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.425893   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:06.425919   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.428471   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428768   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.428794   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428928   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.429109   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.429283   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.429419   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.515180   72441 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:06.519357   72441 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:06.519390   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:06.519464   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:06.519540   72441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:06.519625   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:06.528542   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:06.552463   72441 start.go:296] duration metric: took 126.912829ms for postStartSetup
	I0906 20:04:06.552514   72441 fix.go:56] duration metric: took 20.435203853s for fixHost
	I0906 20:04:06.552540   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.554994   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555521   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.555556   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555739   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.555937   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556095   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556253   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.556409   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.556600   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.556613   72441 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:06.669696   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653046.632932221
	
	I0906 20:04:06.669720   72441 fix.go:216] guest clock: 1725653046.632932221
	I0906 20:04:06.669730   72441 fix.go:229] Guest: 2024-09-06 20:04:06.632932221 +0000 UTC Remote: 2024-09-06 20:04:06.552518521 +0000 UTC m=+289.061134864 (delta=80.4137ms)
	I0906 20:04:06.669761   72441 fix.go:200] guest clock delta is within tolerance: 80.4137ms
	I0906 20:04:06.669769   72441 start.go:83] releasing machines lock for "embed-certs-458066", held for 20.552490687s
	I0906 20:04:06.669801   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.670060   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:06.673015   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673405   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.673433   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673599   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674041   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674210   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674304   72441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:06.674351   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.674414   72441 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:06.674437   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.676916   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677063   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677314   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677341   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677481   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677513   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677686   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677691   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677864   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677878   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678013   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678025   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.678191   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.758176   72441 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:06.782266   72441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:06.935469   72441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:06.941620   72441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:06.941680   72441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:06.957898   72441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:06.957927   72441 start.go:495] detecting cgroup driver to use...
	I0906 20:04:06.957995   72441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:06.978574   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:06.993967   72441 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:06.994035   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:07.008012   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:07.022073   72441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:07.133622   72441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:07.291402   72441 docker.go:233] disabling docker service ...
	I0906 20:04:07.291478   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:07.306422   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:07.321408   72441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:07.442256   72441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:07.564181   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:07.579777   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:07.599294   72441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:07.599361   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.610457   72441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:07.610555   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.621968   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.633527   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.645048   72441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:07.659044   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.670526   72441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.689465   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.701603   72441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:07.712085   72441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:07.712144   72441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:07.728406   72441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:07.739888   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:07.862385   72441 ssh_runner.go:195] Run: sudo systemctl restart crio
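	The sed commands a few lines above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and the net.ipv4.ip_unprivileged_port_start=0 sysctl before the restart. A minimal spot-check of those values over the same SSH session might look like the sketch below; the drop-in path and key names are taken from the log, while the script itself is illustrative and not part of the minikube test suite.
	
		# Illustrative check only: confirm the cri-o drop-in contains the values
		# the preceding sed commands are expected to have written.
		CONF=/etc/crio/crio.conf.d/02-crio.conf
		for expected in \
		  'pause_image = "registry.k8s.io/pause:3.10"' \
		  'cgroup_manager = "cgroupfs"' \
		  'conmon_cgroup = "pod"' \
		  '"net.ipv4.ip_unprivileged_port_start=0",'
		do
		  if sudo grep -qF "$expected" "$CONF"; then
		    echo "ok:      $expected"
		  else
		    echo "missing: $expected"
		  fi
		done
	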
	I0906 20:04:07.954721   72441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:07.954792   72441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:07.959478   72441 start.go:563] Will wait 60s for crictl version
	I0906 20:04:07.959545   72441 ssh_runner.go:195] Run: which crictl
	I0906 20:04:07.963893   72441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:08.003841   72441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:08.003917   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.032191   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.063563   72441 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:04:07.961590   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting to get IP...
	I0906 20:04:07.962441   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962859   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962923   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:07.962841   73982 retry.go:31] will retry after 292.508672ms: waiting for machine to come up
	I0906 20:04:08.257346   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257845   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257867   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.257815   73982 retry.go:31] will retry after 265.967606ms: waiting for machine to come up
	I0906 20:04:08.525352   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525907   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.525834   73982 retry.go:31] will retry after 308.991542ms: waiting for machine to come up
	I0906 20:04:08.836444   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837021   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837053   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.836973   73982 retry.go:31] will retry after 483.982276ms: waiting for machine to come up
	I0906 20:04:09.322661   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323161   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323184   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.323125   73982 retry.go:31] will retry after 574.860867ms: waiting for machine to come up
	I0906 20:04:09.899849   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900228   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.900187   73982 retry.go:31] will retry after 769.142372ms: waiting for machine to come up
	I0906 20:04:10.671316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671796   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671853   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:10.671771   73982 retry.go:31] will retry after 720.232224ms: waiting for machine to come up
	I0906 20:04:11.393120   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393502   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393534   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:11.393447   73982 retry.go:31] will retry after 975.812471ms: waiting for machine to come up
	I0906 20:04:08.064907   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:08.067962   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068410   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:08.068442   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068626   72441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:08.072891   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:08.086275   72441 kubeadm.go:883] updating cluster {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:08.086383   72441 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:08.086423   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:08.123100   72441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:08.123158   72441 ssh_runner.go:195] Run: which lz4
	I0906 20:04:08.127330   72441 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:08.131431   72441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:08.131466   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:09.584066   72441 crio.go:462] duration metric: took 1.456765631s to copy over tarball
	I0906 20:04:09.584131   72441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:11.751911   72441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.167751997s)
	I0906 20:04:11.751949   72441 crio.go:469] duration metric: took 2.167848466s to extract the tarball
	I0906 20:04:11.751959   72441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:11.790385   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:11.831973   72441 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:11.831995   72441 cache_images.go:84] Images are preloaded, skipping loading
	I0906 20:04:11.832003   72441 kubeadm.go:934] updating node { 192.168.39.118 8443 v1.31.0 crio true true} ...
	I0906 20:04:11.832107   72441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-458066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:11.832166   72441 ssh_runner.go:195] Run: crio config
	I0906 20:04:11.881946   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:11.881973   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:11.882000   72441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:11.882028   72441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-458066 NodeName:embed-certs-458066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:11.882186   72441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-458066"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:11.882266   72441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:11.892537   72441 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:11.892617   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:11.902278   72441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0906 20:04:11.920451   72441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:11.938153   72441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0906 20:04:11.957510   72441 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:11.961364   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:11.973944   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:12.109677   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:12.126348   72441 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066 for IP: 192.168.39.118
	I0906 20:04:12.126378   72441 certs.go:194] generating shared ca certs ...
	I0906 20:04:12.126399   72441 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:12.126562   72441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:12.126628   72441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:12.126642   72441 certs.go:256] generating profile certs ...
	I0906 20:04:12.126751   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/client.key
	I0906 20:04:12.126843   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key.c10a03b1
	I0906 20:04:12.126904   72441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key
	I0906 20:04:12.127063   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:12.127111   72441 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:12.127123   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:12.127153   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:12.127189   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:12.127218   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:12.127268   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:12.128117   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:12.185978   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:12.218124   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:12.254546   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:12.290098   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0906 20:04:12.317923   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:12.341186   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:12.363961   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:04:12.388000   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:12.418618   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:12.442213   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:12.465894   72441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:12.482404   72441 ssh_runner.go:195] Run: openssl version
	I0906 20:04:12.488370   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:12.499952   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504565   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504619   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.510625   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:12.522202   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:12.370306   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370743   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370779   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:12.370688   73982 retry.go:31] will retry after 1.559820467s: waiting for machine to come up
	I0906 20:04:13.932455   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933042   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933072   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:13.932985   73982 retry.go:31] will retry after 1.968766852s: waiting for machine to come up
	I0906 20:04:15.903304   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903826   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903855   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:15.903775   73982 retry.go:31] will retry after 2.738478611s: waiting for machine to come up
	I0906 20:04:12.533501   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538229   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538284   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.544065   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:12.555220   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:12.566402   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571038   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571093   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.577057   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:12.588056   72441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:12.592538   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:12.598591   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:12.604398   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:12.610502   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:12.616513   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:12.622859   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
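Note: the block of `openssl x509 -noout -in ... -checkend 86400` commands above verifies that each control-plane certificate will still be valid 86400 seconds (24 hours) from now. A minimal Go sketch of the same check is below; it is illustrative only (not minikube's code), uses only the standard library, and reuses one certificate path taken from the log (reading it requires root on the node).

// certcheck.go: report whether a PEM certificate expires within 24 hours,
// the same condition "-checkend 86400" tests.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Expiring within the next 24h corresponds to a nonzero exit from -checkend 86400.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}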
	I0906 20:04:12.628975   72441 kubeadm.go:392] StartCluster: {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:12.629103   72441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:12.629154   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.667699   72441 cri.go:89] found id: ""
	I0906 20:04:12.667764   72441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:12.678070   72441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:12.678092   72441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:12.678148   72441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:12.687906   72441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:12.688889   72441 kubeconfig.go:125] found "embed-certs-458066" server: "https://192.168.39.118:8443"
	I0906 20:04:12.690658   72441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:12.700591   72441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.118
	I0906 20:04:12.700623   72441 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:12.700635   72441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:12.700675   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.741471   72441 cri.go:89] found id: ""
	I0906 20:04:12.741553   72441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:12.757877   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:12.767729   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:12.767748   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:12.767800   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:12.777094   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:12.777157   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:12.786356   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:12.795414   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:12.795470   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:12.804727   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.813481   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:12.813534   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.822844   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:12.831877   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:12.831930   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:12.841082   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:12.850560   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:12.975888   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:13.850754   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.064392   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.140680   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.239317   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:14.239411   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:14.740313   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.240388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.740388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.755429   72441 api_server.go:72] duration metric: took 1.516111342s to wait for apiserver process to appear ...
	I0906 20:04:15.755462   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:15.755483   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.544772   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.544807   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.544824   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.596487   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.596546   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.755752   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.761917   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:18.761946   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.256512   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.265937   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.265973   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.756568   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.763581   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.763606   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:20.256237   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:20.262036   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:04:20.268339   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:20.268364   72441 api_server.go:131] duration metric: took 4.512894792s to wait for apiserver health ...
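Note: the healthz wait above retries through 403 responses (anonymous access is rejected until RBAC bootstrap roles exist) and 500 responses (post-start hooks still failing) until /healthz finally returns 200 "ok". The Go sketch below shows that polling pattern in a self-contained form; it is not minikube's api_server.go implementation, and skipping TLS verification is an assumption made only to keep the sketch standalone against the cluster's self-signed CA.

// healthwait.go: poll an apiserver /healthz endpoint until it returns 200,
// treating non-200 responses (403, 500) as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption for the sketch: trust is skipped instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return nil
			}
			// 403 (RBAC not bootstrapped yet) and 500 (post-start hooks pending)
			// both mean the apiserver is up but not healthy; keep waiting.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.118:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}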
	I0906 20:04:20.268372   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:20.268378   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:20.270262   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:18.644597   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645056   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645088   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:18.644992   73982 retry.go:31] will retry after 2.982517528s: waiting for machine to come up
	I0906 20:04:21.631028   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631392   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631414   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:21.631367   73982 retry.go:31] will retry after 3.639469531s: waiting for machine to come up
	I0906 20:04:20.271474   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:20.282996   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:04:20.303957   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:20.315560   72441 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:20.315602   72441 system_pods.go:61] "coredns-6f6b679f8f-v6z7z" [b2c18dba-1210-4e95-a705-95abceca92f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:20.315611   72441 system_pods.go:61] "etcd-embed-certs-458066" [cf60e7c7-1801-42c7-be25-85242c22a5d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:20.315619   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [48c684ec-f93f-49ec-868b-6e7bc20ad506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:20.315625   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [1d55b520-2d8f-4517-a491-8193eaff5d89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:20.315631   72441 system_pods.go:61] "kube-proxy-crvq7" [f0610684-81ee-426a-adc2-aea80faab822] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:20.315639   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [d8744325-58f2-43a8-9a93-516b5a6fb989] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:20.315644   72441 system_pods.go:61] "metrics-server-6867b74b74-gtg94" [600e9c90-20db-407e-b586-fae3809d87b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:20.315649   72441 system_pods.go:61] "storage-provisioner" [1efe7188-2d33-4a29-afbe-823adbef73b3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:20.315657   72441 system_pods.go:74] duration metric: took 11.674655ms to wait for pod list to return data ...
	I0906 20:04:20.315665   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:20.318987   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:20.319012   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:20.319023   72441 node_conditions.go:105] duration metric: took 3.354197ms to run NodePressure ...
	I0906 20:04:20.319038   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:20.600925   72441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607562   72441 kubeadm.go:739] kubelet initialised
	I0906 20:04:20.607590   72441 kubeadm.go:740] duration metric: took 6.637719ms waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607602   72441 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:20.611592   72441 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
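Note: the "waiting up to 4m0s for pod ..." step above polls each system-critical pod until its Ready condition is True. The sketch below shows one way to express that wait with client-go; it is not minikube's pod_ready.go code, and the kubeconfig path is a placeholder assumption.

// podwait.go: poll a named kube-system pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Placeholder path; the real test uses the profile's generated kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-v6z7z", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}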
	I0906 20:04:26.558023   73230 start.go:364] duration metric: took 3m30.994815351s to acquireMachinesLock for "old-k8s-version-843298"
	I0906 20:04:26.558087   73230 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:26.558096   73230 fix.go:54] fixHost starting: 
	I0906 20:04:26.558491   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:26.558542   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:26.576511   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0906 20:04:26.576933   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:26.577434   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:04:26.577460   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:26.577794   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:26.577968   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:26.578128   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetState
	I0906 20:04:26.579640   73230 fix.go:112] recreateIfNeeded on old-k8s-version-843298: state=Stopped err=<nil>
	I0906 20:04:26.579674   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	W0906 20:04:26.579829   73230 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:26.581843   73230 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-843298" ...
	I0906 20:04:25.275406   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275902   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Found IP for machine: 192.168.50.16
	I0906 20:04:25.275942   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has current primary IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275955   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserving static IP address...
	I0906 20:04:25.276431   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.276463   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserved static IP address: 192.168.50.16
	I0906 20:04:25.276482   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | skip adding static IP to network mk-default-k8s-diff-port-653828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"}
	I0906 20:04:25.276493   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for SSH to be available...
	I0906 20:04:25.276512   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Getting to WaitForSSH function...
	I0906 20:04:25.278727   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279006   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.279037   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279196   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH client type: external
	I0906 20:04:25.279234   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa (-rw-------)
	I0906 20:04:25.279289   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:25.279312   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | About to run SSH command:
	I0906 20:04:25.279330   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | exit 0
	I0906 20:04:25.405134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:25.405524   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetConfigRaw
	I0906 20:04:25.406134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.408667   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409044   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.409074   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409332   72867 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/config.json ...
	I0906 20:04:25.409513   72867 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:25.409530   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:25.409724   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.411737   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412027   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.412060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412171   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.412362   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412489   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412662   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.412802   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.413045   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.413059   72867 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:25.513313   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:25.513343   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513613   72867 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653828"
	I0906 20:04:25.513644   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513851   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.516515   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.516847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.516895   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.517116   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.517300   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517461   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517574   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.517712   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.517891   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.517905   72867 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653828 && echo "default-k8s-diff-port-653828" | sudo tee /etc/hostname
	I0906 20:04:25.637660   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653828
	
	I0906 20:04:25.637691   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.640258   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640600   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.640626   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640811   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.641001   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641177   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641333   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.641524   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.641732   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.641754   72867 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:25.749746   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
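The provisioning step above sets the guest hostname and then makes sure /etc/hosts maps 127.0.1.1 to it, rewriting an existing 127.0.1.1 entry if present and appending one otherwise. Below is a minimal sketch of that same idea in Go, assuming direct local file access rather than the SSH path the provisioner actually uses; it is illustrative only, not minikube's implementation.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry maps 127.0.1.1 to hostname in a hosts-format file:
// it leaves the file alone if the hostname is already mapped, rewrites an
// existing 127.0.1.1 line if there is one, and appends a line otherwise.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, line := range lines {
		if f := strings.Fields(line); len(f) >= 2 && f[1] == hostname {
			return nil // hostname already mapped, nothing to do
		}
	}
	replaced := false
	for i, line := range lines {
		if f := strings.Fields(line); len(f) >= 1 && f[0] == "127.0.1.1" {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-653828"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```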
	I0906 20:04:25.749773   72867 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:25.749795   72867 buildroot.go:174] setting up certificates
	I0906 20:04:25.749812   72867 provision.go:84] configureAuth start
	I0906 20:04:25.749828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.750111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.752528   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.752893   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.752920   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.753104   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.755350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755642   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.755666   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755808   72867 provision.go:143] copyHostCerts
	I0906 20:04:25.755858   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:25.755875   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:25.755930   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:25.756017   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:25.756024   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:25.756046   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:25.756129   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:25.756137   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:25.756155   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:25.756212   72867 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653828 san=[127.0.0.1 192.168.50.16 default-k8s-diff-port-653828 localhost minikube]
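configureAuth regenerates a server certificate whose subject alternative names cover 127.0.0.1, the guest IP, the machine name, localhost and minikube, signed by the profile's CA. A minimal sketch of issuing such a SAN-bearing certificate with Go's crypto/x509 follows; the in-memory CA, key sizes and validity periods are illustrative assumptions, not minikube's actual settings or key material.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// In-memory CA standing in for ~/.minikube/certs/ca.pem and ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-653828"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-653828", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.16")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
```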
	I0906 20:04:25.934931   72867 provision.go:177] copyRemoteCerts
	I0906 20:04:25.935018   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:25.935060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.937539   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.937899   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.937925   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.938111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.938308   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.938469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.938644   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.019666   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:26.043989   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0906 20:04:26.066845   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 20:04:26.090526   72867 provision.go:87] duration metric: took 340.698646ms to configureAuth
	I0906 20:04:26.090561   72867 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:26.090786   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:26.090878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.093783   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094167   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.094201   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094503   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.094689   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094850   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094975   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.095130   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.095357   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.095389   72867 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:26.324270   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:26.324301   72867 machine.go:96] duration metric: took 914.775498ms to provisionDockerMachine
	I0906 20:04:26.324315   72867 start.go:293] postStartSetup for "default-k8s-diff-port-653828" (driver="kvm2")
	I0906 20:04:26.324328   72867 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:26.324350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.324726   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:26.324759   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.327339   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327718   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.327750   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.328147   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.328309   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.328449   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.408475   72867 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:26.413005   72867 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:26.413033   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:26.413107   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:26.413203   72867 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:26.413320   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:26.422811   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:26.449737   72867 start.go:296] duration metric: took 125.408167ms for postStartSetup
	I0906 20:04:26.449772   72867 fix.go:56] duration metric: took 19.779834553s for fixHost
	I0906 20:04:26.449792   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.452589   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.452990   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.453022   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.453323   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.453529   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453710   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.453966   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.454125   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.454136   72867 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:26.557844   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653066.531604649
	
	I0906 20:04:26.557875   72867 fix.go:216] guest clock: 1725653066.531604649
	I0906 20:04:26.557884   72867 fix.go:229] Guest: 2024-09-06 20:04:26.531604649 +0000 UTC Remote: 2024-09-06 20:04:26.449775454 +0000 UTC m=+269.281822801 (delta=81.829195ms)
	I0906 20:04:26.557904   72867 fix.go:200] guest clock delta is within tolerance: 81.829195ms
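fix.go parses the guest's `date +%s.%N` output, compares it against the host-side timestamp, and only acts if the delta exceeds a tolerance; here the delta is 81.829195ms and is accepted. A tiny sketch of that comparison, using the values from the log above (the one-second tolerance is an assumption for illustration, not minikube's exact threshold):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time (the fractional part from %N is always 9 digits).
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1725653066.531604649")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 9, 6, 20, 4, 26, 449775454, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative tolerance only
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}
```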
	I0906 20:04:26.557909   72867 start.go:83] releasing machines lock for "default-k8s-diff-port-653828", held for 19.888002519s
	I0906 20:04:26.557943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.558256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:26.561285   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561705   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.561732   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562425   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562628   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562732   72867 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:26.562782   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.562920   72867 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:26.562950   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.565587   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.565970   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566149   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566331   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.566542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.566605   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566633   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566744   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.566756   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566992   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.567145   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.567302   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.672529   72867 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:26.678762   72867 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:26.825625   72867 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:26.832290   72867 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:26.832363   72867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:26.848802   72867 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:26.848824   72867 start.go:495] detecting cgroup driver to use...
	I0906 20:04:26.848917   72867 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:26.864986   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:26.878760   72867 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:26.878813   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:26.893329   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:26.909090   72867 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:27.025534   72867 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:27.190190   72867 docker.go:233] disabling docker service ...
	I0906 20:04:27.190293   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:22.617468   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:24.618561   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.118448   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.204700   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:27.217880   72867 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:27.346599   72867 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:27.466601   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:27.480785   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:27.501461   72867 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:27.501523   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.511815   72867 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:27.511868   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.521806   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.532236   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.542227   72867 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:27.552389   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.563462   72867 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.583365   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
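The sequence of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause_image, cgroup_manager, conmon_cgroup and a default_sysctls entry enabling unprivileged low ports. Below is a minimal sketch of the same "set key = value in a drop-in, or append it" rewrite done in Go instead of sed; the file path and keys are taken from the log, everything else is an illustrative assumption.

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey rewrites a `key = ...` line in a CRI-O drop-in, appending the
// setting if no such line exists yet.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	line := fmt.Sprintf("%s = %s", key, value)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	for key, value := range map[string]string{
		"pause_image":    `"registry.k8s.io/pause:3.10"`,
		"cgroup_manager": `"cgroupfs"`,
		"conmon_cgroup":  `"pod"`,
	} {
		if err := setConfKey(conf, key, value); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
```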
	I0906 20:04:27.594465   72867 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:27.605074   72867 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:27.605140   72867 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:27.618702   72867 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:27.630566   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:27.748387   72867 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:27.841568   72867 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:27.841652   72867 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
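After restarting CRI-O, start-up waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A small sketch of that kind of bounded socket wait (the poll interval is an assumption; minikube's actual retry cadence may differ):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the unix socket shows up on disk or the timeout
// expires, mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```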
	I0906 20:04:27.846880   72867 start.go:563] Will wait 60s for crictl version
	I0906 20:04:27.846936   72867 ssh_runner.go:195] Run: which crictl
	I0906 20:04:27.851177   72867 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:27.895225   72867 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:27.895327   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.934388   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.966933   72867 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:04:26.583194   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .Start
	I0906 20:04:26.583341   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring networks are active...
	I0906 20:04:26.584046   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network default is active
	I0906 20:04:26.584420   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network mk-old-k8s-version-843298 is active
	I0906 20:04:26.584851   73230 main.go:141] libmachine: (old-k8s-version-843298) Getting domain xml...
	I0906 20:04:26.585528   73230 main.go:141] libmachine: (old-k8s-version-843298) Creating domain...
	I0906 20:04:27.874281   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting to get IP...
	I0906 20:04:27.875189   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:27.875762   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:27.875844   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:27.875754   74166 retry.go:31] will retry after 289.364241ms: waiting for machine to come up
	I0906 20:04:28.166932   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.167349   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.167375   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.167303   74166 retry.go:31] will retry after 317.106382ms: waiting for machine to come up
	I0906 20:04:28.485664   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.486147   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.486241   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.486199   74166 retry.go:31] will retry after 401.712201ms: waiting for machine to come up
	I0906 20:04:28.890039   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.890594   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.890621   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.890540   74166 retry.go:31] will retry after 570.418407ms: waiting for machine to come up
	I0906 20:04:29.462983   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:29.463463   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:29.463489   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:29.463428   74166 retry.go:31] will retry after 696.361729ms: waiting for machine to come up
	I0906 20:04:30.161305   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:30.161829   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:30.161876   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:30.161793   74166 retry.go:31] will retry after 896.800385ms: waiting for machine to come up
	I0906 20:04:27.968123   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:27.971448   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.971880   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:27.971904   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.972128   72867 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:27.981160   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:27.994443   72867 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653
828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:27.994575   72867 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:27.994635   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:28.043203   72867 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:28.043285   72867 ssh_runner.go:195] Run: which lz4
	I0906 20:04:28.048798   72867 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:28.053544   72867 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:28.053577   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:29.490070   72867 crio.go:462] duration metric: took 1.441303819s to copy over tarball
	I0906 20:04:29.490142   72867 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:31.649831   72867 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159650072s)
	I0906 20:04:31.649870   72867 crio.go:469] duration metric: took 2.159772826s to extract the tarball
	I0906 20:04:31.649880   72867 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:31.686875   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:31.729557   72867 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:31.729580   72867 cache_images.go:84] Images are preloaded, skipping loading
	I0906 20:04:31.729587   72867 kubeadm.go:934] updating node { 192.168.50.16 8444 v1.31.0 crio true true} ...
	I0906 20:04:31.729698   72867 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:31.729799   72867 ssh_runner.go:195] Run: crio config
	I0906 20:04:31.777272   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:31.777299   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:31.777316   72867 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:31.777336   72867 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.16 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653828 NodeName:default-k8s-diff-port-653828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:31.777509   72867 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.16
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653828"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
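The kubeadm config printed above is rendered from the options logged at kubeadm.go:181 (advertise address, API server port, CRI socket, node name, and so on). A toy sketch of that templating step in Go, covering only the InitConfiguration fragment; the template text here is illustrative and is not minikube's bundled template.

```go
package main

import (
	"os"
	"text/template"
)

// A toy fragment of the InitConfiguration shown above; the real template
// covers the full multi-document kubeadm config.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type kubeadmParams struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	err := tmpl.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress: "192.168.50.16",
		APIServerPort:    8444,
		CRISocket:        "unix:///var/run/crio/crio.sock",
		NodeName:         "default-k8s-diff-port-653828",
		NodeIP:           "192.168.50.16",
	})
	if err != nil {
		panic(err)
	}
}
```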
	
	I0906 20:04:31.777577   72867 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:31.788008   72867 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:31.788070   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:31.798261   72867 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0906 20:04:31.815589   72867 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:31.832546   72867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0906 20:04:31.849489   72867 ssh_runner.go:195] Run: grep 192.168.50.16	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:31.853452   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:31.866273   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:31.984175   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:32.001110   72867 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828 for IP: 192.168.50.16
	I0906 20:04:32.001139   72867 certs.go:194] generating shared ca certs ...
	I0906 20:04:32.001160   72867 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:32.001343   72867 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:32.001399   72867 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:32.001413   72867 certs.go:256] generating profile certs ...
	I0906 20:04:32.001509   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/client.key
	I0906 20:04:32.001613   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key.01951d83
	I0906 20:04:32.001665   72867 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key
	I0906 20:04:32.001815   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:32.001866   72867 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:32.001880   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:32.001913   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:32.001933   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:32.001962   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:32.002001   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:32.002812   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:32.037177   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:32.078228   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:32.117445   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:32.153039   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 20:04:32.186458   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:28.120786   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:28.120826   72441 pod_ready.go:82] duration metric: took 7.509209061s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:28.120842   72441 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:30.129518   72441 pod_ready.go:103] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:31.059799   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.060272   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.060294   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.060226   74166 retry.go:31] will retry after 841.627974ms: waiting for machine to come up
	I0906 20:04:31.903823   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.904258   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.904280   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.904238   74166 retry.go:31] will retry after 1.274018797s: waiting for machine to come up
	I0906 20:04:33.179723   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:33.180090   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:33.180133   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:33.180059   74166 retry.go:31] will retry after 1.496142841s: waiting for machine to come up
	I0906 20:04:34.678209   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:34.678697   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:34.678726   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:34.678652   74166 retry.go:31] will retry after 1.795101089s: waiting for machine to come up
	I0906 20:04:32.216815   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:32.245378   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:32.272163   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:32.297017   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:32.321514   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:32.345724   72867 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:32.362488   72867 ssh_runner.go:195] Run: openssl version
	I0906 20:04:32.368722   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:32.380099   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384777   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384834   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.392843   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:32.405716   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:32.417043   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422074   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422143   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.427946   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:32.439430   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:32.450466   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455056   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455114   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.460970   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:32.471978   72867 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:32.476838   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:32.483008   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:32.489685   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:32.496446   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:32.502841   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:32.509269   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
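The series of `openssl x509 ... -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least another 24 hours before the cluster restart proceeds. An equivalent check written in Go (certificate paths copied from the log; the 24h window matches -checkend 86400):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window — the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		expiring, err := expiresWithin(c, 24*time.Hour)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", c, expiring)
	}
}
```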
	I0906 20:04:32.515687   72867 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/mini
kube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:32.515791   72867 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:32.515853   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.567687   72867 cri.go:89] found id: ""
	I0906 20:04:32.567763   72867 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:32.578534   72867 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:32.578552   72867 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:32.578598   72867 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:32.588700   72867 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:32.589697   72867 kubeconfig.go:125] found "default-k8s-diff-port-653828" server: "https://192.168.50.16:8444"
	I0906 20:04:32.591739   72867 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:32.601619   72867 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.16
	I0906 20:04:32.601649   72867 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:32.601659   72867 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:32.601724   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.640989   72867 cri.go:89] found id: ""
	I0906 20:04:32.641056   72867 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:32.659816   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:32.670238   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:32.670274   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:32.670327   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:04:32.679687   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:32.679778   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:32.689024   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:04:32.698403   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:32.698465   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:32.707806   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.717015   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:32.717105   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.726408   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:04:32.735461   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:32.735538   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:32.744701   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:32.754202   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:32.874616   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.759668   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.984693   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.051998   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.155274   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:34.155384   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:34.655749   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.156069   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.656120   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.672043   72867 api_server.go:72] duration metric: took 1.516769391s to wait for apiserver process to appear ...
	I0906 20:04:35.672076   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:35.672099   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:32.628208   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.628235   72441 pod_ready.go:82] duration metric: took 4.507383414s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.628248   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633941   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.633965   72441 pod_ready.go:82] duration metric: took 5.709738ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633975   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639227   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.639249   72441 pod_ready.go:82] duration metric: took 5.26842ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639259   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644664   72441 pod_ready.go:93] pod "kube-proxy-crvq7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.644690   72441 pod_ready.go:82] duration metric: took 5.423551ms for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644701   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650000   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.650022   72441 pod_ready.go:82] duration metric: took 5.312224ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650034   72441 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:34.657709   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:37.157744   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
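The pod_ready.go lines above poll each kube-system control-plane pod until its Ready condition turns True, with a 4m0s budget per pod. Purely as an illustrative sketch (this is not minikube's pod_ready.go; the kubeconfig path, pod name, and 2s poll interval are assumptions), an equivalent wait with client-go could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name are placeholder assumptions for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-gtg94", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // re-check periodically, like the repeated pod_ready.go log lines
	}
	fmt.Println("timed out waiting for pod to become Ready")
}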
	I0906 20:04:38.092386   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.092429   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.092448   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.129071   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.129110   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.172277   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.213527   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.213573   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:38.673103   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.677672   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.677704   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.172237   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.179638   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:39.179670   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.672801   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.678523   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:04:39.688760   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:39.688793   72867 api_server.go:131] duration metric: took 4.016709147s to wait for apiserver health ...
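The healthz sequence above (403 while anonymous access is still forbidden, 500 while poststarthooks finish, then 200 "ok") is the normal progression for a freshly restarted apiserver. As a rough illustration only, not minikube's actual api_server.go, a poll loop of this general shape in Go might look like the following (the URL, overall timeout, and 500ms retry interval are assumptions; TLS verification is skipped purely because this is a throwaway sketch):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver /healthz endpoint until it returns 200
// or the deadline expires, printing non-200 bodies like the log above.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // retry cadence similar to the ~0.5s gaps in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.50.16:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}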
	I0906 20:04:39.688804   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:39.688812   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:39.690721   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:36.474937   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:36.475399   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:36.475497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:36.475351   74166 retry.go:31] will retry after 1.918728827s: waiting for machine to come up
	I0906 20:04:38.397024   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:38.397588   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:38.397617   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:38.397534   74166 retry.go:31] will retry after 3.460427722s: waiting for machine to come up
	I0906 20:04:39.692055   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:39.707875   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
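The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, a minimal bridge CNI configuration of the general kind this step installs is sketched below as a small Go program; every field value (subnet, bridge name, ipam type) is an assumption for illustration, not the file minikube actually wrote:

package main

import "os"

// bridgeConflist is a minimal, assumed bridge CNI configuration; the real
// conflist written by minikube may differ in fields and values.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

func main() {
	// Place the conflist where CRI-O's CNI discovery will pick it up (requires root).
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}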
	I0906 20:04:39.728797   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:39.740514   72867 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:39.740553   72867 system_pods.go:61] "coredns-6f6b679f8f-mvwth" [53675f76-d849-471c-9cd1-561e2f8e6499] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:39.740562   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [f69c9488-87d4-487e-902b-588182c2e2e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:39.740567   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [d641f983-776e-4102-81a3-ba3cf49911a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:39.740579   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [1b09e88d-b038-42d3-9c36-4eee1eff1c4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:39.740585   72867 system_pods.go:61] "kube-proxy-9wlq4" [5254a977-ded3-439d-8db0-cd54ccd96940] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:39.740590   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [f8c16cf5-2c76-428f-83de-e79c49566683] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:39.740594   72867 system_pods.go:61] "metrics-server-6867b74b74-dds56" [6219eb1e-2904-487c-b4ed-d786a0627281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:39.740598   72867 system_pods.go:61] "storage-provisioner" [58dd82cd-e250-4f57-97ad-55408f001cc3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:39.740605   72867 system_pods.go:74] duration metric: took 11.784722ms to wait for pod list to return data ...
	I0906 20:04:39.740614   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:39.745883   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:39.745913   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:39.745923   72867 node_conditions.go:105] duration metric: took 5.304169ms to run NodePressure ...
	I0906 20:04:39.745945   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:40.031444   72867 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036537   72867 kubeadm.go:739] kubelet initialised
	I0906 20:04:40.036556   72867 kubeadm.go:740] duration metric: took 5.087185ms waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036563   72867 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:40.044926   72867 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:42.050947   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:39.657641   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:42.156327   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:41.860109   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:41.860612   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:41.860640   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:41.860560   74166 retry.go:31] will retry after 4.509018672s: waiting for machine to come up
	I0906 20:04:44.051148   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.554068   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:44.157427   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.656559   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:47.793833   72322 start.go:364] duration metric: took 56.674519436s to acquireMachinesLock for "no-preload-504385"
	I0906 20:04:47.793890   72322 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:47.793898   72322 fix.go:54] fixHost starting: 
	I0906 20:04:47.794329   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:47.794363   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:47.812048   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0906 20:04:47.812496   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:47.813081   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:04:47.813109   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:47.813446   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:47.813741   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:04:47.813945   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:04:47.815314   72322 fix.go:112] recreateIfNeeded on no-preload-504385: state=Stopped err=<nil>
	I0906 20:04:47.815338   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	W0906 20:04:47.815507   72322 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:47.817424   72322 out.go:177] * Restarting existing kvm2 VM for "no-preload-504385" ...
	I0906 20:04:47.818600   72322 main.go:141] libmachine: (no-preload-504385) Calling .Start
	I0906 20:04:47.818760   72322 main.go:141] libmachine: (no-preload-504385) Ensuring networks are active...
	I0906 20:04:47.819569   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network default is active
	I0906 20:04:47.819883   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network mk-no-preload-504385 is active
	I0906 20:04:47.820233   72322 main.go:141] libmachine: (no-preload-504385) Getting domain xml...
	I0906 20:04:47.821002   72322 main.go:141] libmachine: (no-preload-504385) Creating domain...
	I0906 20:04:46.374128   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374599   73230 main.go:141] libmachine: (old-k8s-version-843298) Found IP for machine: 192.168.72.30
	I0906 20:04:46.374629   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has current primary IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374642   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserving static IP address...
	I0906 20:04:46.375045   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.375071   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | skip adding static IP to network mk-old-k8s-version-843298 - found existing host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"}
	I0906 20:04:46.375081   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserved static IP address: 192.168.72.30
	I0906 20:04:46.375104   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting for SSH to be available...
	I0906 20:04:46.375119   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Getting to WaitForSSH function...
	I0906 20:04:46.377497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377836   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.377883   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377956   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH client type: external
	I0906 20:04:46.377982   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa (-rw-------)
	I0906 20:04:46.378028   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:46.378044   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | About to run SSH command:
	I0906 20:04:46.378054   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | exit 0
	I0906 20:04:46.505025   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:46.505386   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetConfigRaw
	I0906 20:04:46.506031   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.508401   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.508787   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.508827   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.509092   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:04:46.509321   73230 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:46.509339   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:46.509549   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.511816   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512230   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.512265   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512436   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.512618   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512794   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512932   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.513123   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.513364   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.513378   73230 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:46.629437   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:46.629469   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629712   73230 buildroot.go:166] provisioning hostname "old-k8s-version-843298"
	I0906 20:04:46.629731   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629910   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.632226   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632620   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.632653   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632817   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.633009   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633204   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633364   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.633544   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.633758   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.633779   73230 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-843298 && echo "old-k8s-version-843298" | sudo tee /etc/hostname
	I0906 20:04:46.764241   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-843298
	
	I0906 20:04:46.764271   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.766678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767063   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.767092   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767236   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.767414   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767591   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767740   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.767874   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.768069   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.768088   73230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-843298' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-843298/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-843298' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:46.890399   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:46.890424   73230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:46.890461   73230 buildroot.go:174] setting up certificates
	I0906 20:04:46.890471   73230 provision.go:84] configureAuth start
	I0906 20:04:46.890479   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.890714   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.893391   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893765   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.893802   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893942   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.896173   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896505   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.896524   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896688   73230 provision.go:143] copyHostCerts
	I0906 20:04:46.896741   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:46.896756   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:46.896814   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:46.896967   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:46.896977   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:46.897008   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:46.897096   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:46.897104   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:46.897133   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:46.897193   73230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-843298 san=[127.0.0.1 192.168.72.30 localhost minikube old-k8s-version-843298]
	I0906 20:04:47.128570   73230 provision.go:177] copyRemoteCerts
	I0906 20:04:47.128627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:47.128653   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.131548   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.131952   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.131981   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.132164   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.132396   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.132571   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.132705   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.223745   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:47.249671   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0906 20:04:47.274918   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:47.300351   73230 provision.go:87] duration metric: took 409.869395ms to configureAuth
	I0906 20:04:47.300376   73230 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:47.300584   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:04:47.300673   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.303255   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303559   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.303581   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303739   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.303943   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304098   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304266   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.304407   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.304623   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.304644   73230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:47.539793   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:47.539824   73230 machine.go:96] duration metric: took 1.030489839s to provisionDockerMachine
	I0906 20:04:47.539836   73230 start.go:293] postStartSetup for "old-k8s-version-843298" (driver="kvm2")
	I0906 20:04:47.539849   73230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:47.539884   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.540193   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:47.540220   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.543190   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543482   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.543506   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543707   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.543938   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.544097   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.544243   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.633100   73230 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:47.637336   73230 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:47.637368   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:47.637459   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:47.637541   73230 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:47.637627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:47.648442   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:47.672907   73230 start.go:296] duration metric: took 133.055727ms for postStartSetup
	I0906 20:04:47.672951   73230 fix.go:56] duration metric: took 21.114855209s for fixHost
	I0906 20:04:47.672978   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.675459   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.675833   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.675863   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.676005   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.676303   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676471   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676661   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.676846   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.677056   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.677070   73230 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:47.793647   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653087.750926682
	
	I0906 20:04:47.793671   73230 fix.go:216] guest clock: 1725653087.750926682
	I0906 20:04:47.793681   73230 fix.go:229] Guest: 2024-09-06 20:04:47.750926682 +0000 UTC Remote: 2024-09-06 20:04:47.67295613 +0000 UTC m=+232.250384025 (delta=77.970552ms)
	I0906 20:04:47.793735   73230 fix.go:200] guest clock delta is within tolerance: 77.970552ms
	I0906 20:04:47.793746   73230 start.go:83] releasing machines lock for "old-k8s-version-843298", held for 21.235682628s
	I0906 20:04:47.793778   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.794059   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:47.796792   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797195   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.797229   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797425   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798019   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798230   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798314   73230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:47.798360   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.798488   73230 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:47.798509   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.801253   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801632   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.801658   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801867   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802060   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802122   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.802152   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.802210   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802318   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802460   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802504   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.802580   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802722   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.886458   73230 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:47.910204   73230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:48.055661   73230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:48.063024   73230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:48.063090   73230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:48.084749   73230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:48.084771   73230 start.go:495] detecting cgroup driver to use...
	I0906 20:04:48.084892   73230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:48.105494   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:48.123487   73230 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:48.123564   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:48.145077   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:48.161336   73230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:48.283568   73230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:48.445075   73230 docker.go:233] disabling docker service ...
	I0906 20:04:48.445146   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:48.461122   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:48.475713   73230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:48.632804   73230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:48.762550   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:48.778737   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:48.798465   73230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:04:48.798549   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.811449   73230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:48.811523   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.824192   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.835598   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.847396   73230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:48.860005   73230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:48.871802   73230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:48.871864   73230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:48.887596   73230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:48.899508   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:49.041924   73230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:49.144785   73230 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:49.144885   73230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:49.150404   73230 start.go:563] Will wait 60s for crictl version
	I0906 20:04:49.150461   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:49.154726   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:49.202450   73230 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:49.202557   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.235790   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.270094   73230 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0906 20:04:49.271457   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:49.274710   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275114   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:49.275139   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275475   73230 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:49.280437   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:49.293664   73230 kubeadm.go:883] updating cluster {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:49.293793   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:04:49.293842   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:49.348172   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:49.348251   73230 ssh_runner.go:195] Run: which lz4
	I0906 20:04:49.352703   73230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:49.357463   73230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:49.357501   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0906 20:04:49.056116   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:51.553185   72867 pod_ready.go:93] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.553217   72867 pod_ready.go:82] duration metric: took 11.508264695s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.553231   72867 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563758   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.563788   72867 pod_ready.go:82] duration metric: took 10.547437ms for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563802   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570906   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.570940   72867 pod_ready.go:82] duration metric: took 7.128595ms for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570957   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:48.657527   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:50.662561   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:49.146755   72322 main.go:141] libmachine: (no-preload-504385) Waiting to get IP...
	I0906 20:04:49.147780   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.148331   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.148406   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.148309   74322 retry.go:31] will retry after 250.314453ms: waiting for machine to come up
	I0906 20:04:49.399920   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.400386   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.400468   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.400345   74322 retry.go:31] will retry after 247.263156ms: waiting for machine to come up
	I0906 20:04:49.648894   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.649420   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.649445   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.649376   74322 retry.go:31] will retry after 391.564663ms: waiting for machine to come up
	I0906 20:04:50.043107   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.043594   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.043617   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.043548   74322 retry.go:31] will retry after 513.924674ms: waiting for machine to come up
	I0906 20:04:50.559145   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.559637   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.559675   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.559543   74322 retry.go:31] will retry after 551.166456ms: waiting for machine to come up
	I0906 20:04:51.111906   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.112967   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.112999   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.112921   74322 retry.go:31] will retry after 653.982425ms: waiting for machine to come up
	I0906 20:04:51.768950   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.769466   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.769496   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.769419   74322 retry.go:31] will retry after 935.670438ms: waiting for machine to come up
	I0906 20:04:52.706493   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:52.707121   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:52.707152   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:52.707062   74322 retry.go:31] will retry after 1.141487289s: waiting for machine to come up
	I0906 20:04:51.190323   73230 crio.go:462] duration metric: took 1.837657617s to copy over tarball
	I0906 20:04:51.190410   73230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:54.320754   73230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.130319477s)
	I0906 20:04:54.320778   73230 crio.go:469] duration metric: took 3.130424981s to extract the tarball
	I0906 20:04:54.320785   73230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:54.388660   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:54.427475   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:54.427505   73230 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:04:54.427580   73230 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.427594   73230 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.427611   73230 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.427662   73230 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.427691   73230 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.427696   73230 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.427813   73230 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.427672   73230 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0906 20:04:54.429432   73230 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.429443   73230 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.429447   73230 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.429448   73230 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.429475   73230 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.429449   73230 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.429496   73230 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.429589   73230 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 20:04:54.603502   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.607745   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.610516   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.613580   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.616591   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.622381   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 20:04:54.636746   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.690207   73230 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0906 20:04:54.690254   73230 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.690306   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.788758   73230 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0906 20:04:54.788804   73230 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.788876   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.804173   73230 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0906 20:04:54.804228   73230 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.804273   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817005   73230 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0906 20:04:54.817056   73230 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.817074   73230 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0906 20:04:54.817101   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817122   73230 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.817138   73230 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0906 20:04:54.817167   73230 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 20:04:54.817202   73230 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0906 20:04:54.817213   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817220   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.817227   73230 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.817168   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817253   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817301   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.817333   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902264   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.902422   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902522   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.902569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.902602   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.902654   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:54.902708   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.061686   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.073933   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.085364   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:55.085463   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.085399   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.085610   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:55.085725   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.192872   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:55.196085   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.255204   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.288569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.291461   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0906 20:04:55.291541   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.291559   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0906 20:04:55.291726   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0906 20:04:53.578469   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.578504   72867 pod_ready.go:82] duration metric: took 2.007539423s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.578534   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583560   72867 pod_ready.go:93] pod "kube-proxy-9wlq4" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.583583   72867 pod_ready.go:82] duration metric: took 5.037068ms for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583594   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832422   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:54.832453   72867 pod_ready.go:82] duration metric: took 1.248849975s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832480   72867 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:56.840031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:53.156842   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:55.236051   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:53.849822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:53.850213   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:53.850235   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:53.850178   74322 retry.go:31] will retry after 1.858736556s: waiting for machine to come up
	I0906 20:04:55.710052   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:55.710550   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:55.710598   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:55.710496   74322 retry.go:31] will retry after 2.033556628s: waiting for machine to come up
	I0906 20:04:57.745989   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:57.746433   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:57.746459   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:57.746388   74322 retry.go:31] will retry after 1.985648261s: waiting for machine to come up
	I0906 20:04:55.500590   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0906 20:04:55.500702   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0906 20:04:55.500740   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0906 20:04:55.500824   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0906 20:04:55.500885   73230 cache_images.go:92] duration metric: took 1.07336017s to LoadCachedImages
	W0906 20:04:55.500953   73230 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0906 20:04:55.500969   73230 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.20.0 crio true true} ...
	I0906 20:04:55.501112   73230 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-843298 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:55.501192   73230 ssh_runner.go:195] Run: crio config
	I0906 20:04:55.554097   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:04:55.554119   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:55.554135   73230 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:55.554154   73230 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-843298 NodeName:old-k8s-version-843298 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 20:04:55.554359   73230 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-843298"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:55.554441   73230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0906 20:04:55.565923   73230 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:55.566004   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:55.577366   73230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0906 20:04:55.595470   73230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:55.614641   73230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0906 20:04:55.637739   73230 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:55.642233   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:55.658409   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:55.804327   73230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:55.824288   73230 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298 for IP: 192.168.72.30
	I0906 20:04:55.824308   73230 certs.go:194] generating shared ca certs ...
	I0906 20:04:55.824323   73230 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:55.824479   73230 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:55.824541   73230 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:55.824560   73230 certs.go:256] generating profile certs ...
	I0906 20:04:55.824680   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.key
	I0906 20:04:55.824755   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3
	I0906 20:04:55.824799   73230 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key
	I0906 20:04:55.824952   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:55.824995   73230 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:55.825008   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:55.825041   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:55.825072   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:55.825102   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:55.825158   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:55.825878   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:55.868796   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:55.905185   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:55.935398   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:55.973373   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 20:04:56.008496   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:04:56.046017   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:56.080049   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:56.122717   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:56.151287   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:56.184273   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:56.216780   73230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:56.239708   73230 ssh_runner.go:195] Run: openssl version
	I0906 20:04:56.246127   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:56.257597   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262515   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262594   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.269207   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:56.281646   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:56.293773   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299185   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299255   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.305740   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:56.319060   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:56.330840   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336013   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336082   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.342576   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:56.354648   73230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:56.359686   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:56.366321   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:56.372646   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:56.379199   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:56.386208   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:56.392519   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 20:04:56.399335   73230 kubeadm.go:392] StartCluster: {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:56.399442   73230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:56.399495   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.441986   73230 cri.go:89] found id: ""
	I0906 20:04:56.442069   73230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:56.454884   73230 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:56.454907   73230 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:56.454977   73230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:56.465647   73230 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:56.466650   73230 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:04:56.467285   73230 kubeconfig.go:62] /home/jenkins/minikube-integration/19576-6021/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-843298" cluster setting kubeconfig missing "old-k8s-version-843298" context setting]
	I0906 20:04:56.468248   73230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:56.565587   73230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:56.576221   73230 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.30
	I0906 20:04:56.576261   73230 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:56.576277   73230 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:56.576342   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.621597   73230 cri.go:89] found id: ""
	I0906 20:04:56.621663   73230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:56.639924   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:56.649964   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:56.649989   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:56.650042   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:56.661290   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:56.661343   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:56.671361   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:56.680865   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:56.680939   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:56.696230   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.706613   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:56.706692   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.719635   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:56.729992   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:56.730045   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:56.740040   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:56.750666   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:56.891897   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.681824   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.972206   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.091751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.206345   73230 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:58.206443   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:58.707412   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.206780   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.707273   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:00.207218   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.340092   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:01.838387   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:57.658033   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:00.157741   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:59.734045   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:59.734565   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:59.734592   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:59.734506   74322 retry.go:31] will retry after 2.767491398s: waiting for machine to come up
	I0906 20:05:02.505314   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:02.505749   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:05:02.505780   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:05:02.505697   74322 retry.go:31] will retry after 3.51382931s: waiting for machine to come up
	I0906 20:05:00.707010   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.206708   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.707125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.207349   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.706670   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.207287   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.706650   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.207125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.707193   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:05.207119   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
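The repeated pgrep runs interleaved above are the apiserver wait loop: the runner polls about twice per second until a kube-apiserver process for this profile shows up. As a stand-alone sketch of that loop:

	# Poll (the log shows roughly 500ms between attempts) until kube-apiserver is up.
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done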
	I0906 20:05:03.838639   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:05.839195   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:02.655906   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:04.656677   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:07.157732   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:06.023595   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024063   72322 main.go:141] libmachine: (no-preload-504385) Found IP for machine: 192.168.61.184
	I0906 20:05:06.024095   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has current primary IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024105   72322 main.go:141] libmachine: (no-preload-504385) Reserving static IP address...
	I0906 20:05:06.024576   72322 main.go:141] libmachine: (no-preload-504385) Reserved static IP address: 192.168.61.184
	I0906 20:05:06.024598   72322 main.go:141] libmachine: (no-preload-504385) Waiting for SSH to be available...
	I0906 20:05:06.024621   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.024643   72322 main.go:141] libmachine: (no-preload-504385) DBG | skip adding static IP to network mk-no-preload-504385 - found existing host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"}
	I0906 20:05:06.024666   72322 main.go:141] libmachine: (no-preload-504385) DBG | Getting to WaitForSSH function...
	I0906 20:05:06.026845   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027166   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.027219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027296   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH client type: external
	I0906 20:05:06.027321   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa (-rw-------)
	I0906 20:05:06.027355   72322 main.go:141] libmachine: (no-preload-504385) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:05:06.027376   72322 main.go:141] libmachine: (no-preload-504385) DBG | About to run SSH command:
	I0906 20:05:06.027403   72322 main.go:141] libmachine: (no-preload-504385) DBG | exit 0
	I0906 20:05:06.148816   72322 main.go:141] libmachine: (no-preload-504385) DBG | SSH cmd err, output: <nil>: 
	I0906 20:05:06.149196   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetConfigRaw
	I0906 20:05:06.149951   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.152588   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.152970   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.153003   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.153238   72322 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/config.json ...
	I0906 20:05:06.153485   72322 machine.go:93] provisionDockerMachine start ...
	I0906 20:05:06.153508   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:06.153714   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.156031   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156394   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.156425   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156562   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.156732   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.156901   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.157051   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.157205   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.157411   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.157425   72322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:05:06.261544   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:05:06.261586   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.261861   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:05:06.261895   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.262063   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.264812   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265192   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.265219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265400   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.265570   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265705   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265856   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.265990   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.266145   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.266157   72322 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-504385 && echo "no-preload-504385" | sudo tee /etc/hostname
	I0906 20:05:06.383428   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-504385
	
	I0906 20:05:06.383456   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.386368   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386722   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.386755   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386968   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.387152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387322   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387439   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.387617   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.387817   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.387840   72322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-504385' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-504385/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-504385' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:05:06.501805   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:05:06.501836   72322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:05:06.501854   72322 buildroot.go:174] setting up certificates
	I0906 20:05:06.501866   72322 provision.go:84] configureAuth start
	I0906 20:05:06.501873   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.502152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.504721   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505086   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.505115   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505250   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.507420   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507765   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.507795   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507940   72322 provision.go:143] copyHostCerts
	I0906 20:05:06.508008   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:05:06.508031   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:05:06.508087   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:05:06.508175   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:05:06.508183   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:05:06.508208   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:05:06.508297   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:05:06.508307   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:05:06.508338   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:05:06.508406   72322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.no-preload-504385 san=[127.0.0.1 192.168.61.184 localhost minikube no-preload-504385]
	I0906 20:05:06.681719   72322 provision.go:177] copyRemoteCerts
	I0906 20:05:06.681786   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:05:06.681810   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.684460   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684779   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.684822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684962   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.685125   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.685258   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.685368   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:06.767422   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:05:06.794881   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0906 20:05:06.821701   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:05:06.848044   72322 provision.go:87] duration metric: took 346.1664ms to configureAuth
	I0906 20:05:06.848075   72322 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:05:06.848271   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:05:06.848348   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.850743   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851037   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.851064   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851226   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.851395   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851549   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851674   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.851791   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.851993   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.852020   72322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:05:07.074619   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:05:07.074643   72322 machine.go:96] duration metric: took 921.143238ms to provisionDockerMachine
	I0906 20:05:07.074654   72322 start.go:293] postStartSetup for "no-preload-504385" (driver="kvm2")
	I0906 20:05:07.074664   72322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:05:07.074678   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.075017   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:05:07.075042   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.077988   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078268   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.078287   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078449   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.078634   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.078791   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.078946   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.165046   72322 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:05:07.169539   72322 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:05:07.169565   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:05:07.169631   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:05:07.169700   72322 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:05:07.169783   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:05:07.179344   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:07.204213   72322 start.go:296] duration metric: took 129.545341ms for postStartSetup
	I0906 20:05:07.204265   72322 fix.go:56] duration metric: took 19.41036755s for fixHost
	I0906 20:05:07.204287   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.207087   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207473   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.207513   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207695   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.207905   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208090   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208267   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.208436   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:07.208640   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:07.208655   72322 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:05:07.314172   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653107.281354639
	
	I0906 20:05:07.314195   72322 fix.go:216] guest clock: 1725653107.281354639
	I0906 20:05:07.314205   72322 fix.go:229] Guest: 2024-09-06 20:05:07.281354639 +0000 UTC Remote: 2024-09-06 20:05:07.204269406 +0000 UTC m=+358.676673749 (delta=77.085233ms)
	I0906 20:05:07.314228   72322 fix.go:200] guest clock delta is within tolerance: 77.085233ms
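The reported delta is simply the guest clock reading minus the host-side timestamp taken at the same probe: 1725653107.281354639 - 1725653107.204269406 ≈ 0.077085 s, i.e. the 77.085233ms above, so the guest time is left untouched.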
	I0906 20:05:07.314237   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 19.52037381s
	I0906 20:05:07.314266   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.314552   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:07.317476   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.317839   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.317873   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.318003   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318542   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318716   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318821   72322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:05:07.318876   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.318991   72322 ssh_runner.go:195] Run: cat /version.json
	I0906 20:05:07.319018   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.321880   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322102   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322308   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322340   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322472   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322508   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322550   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322685   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.322713   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322868   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.322875   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.323062   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.323066   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.323221   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.424438   72322 ssh_runner.go:195] Run: systemctl --version
	I0906 20:05:07.430755   72322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:05:07.579436   72322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:05:07.585425   72322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:05:07.585493   72322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:05:07.601437   72322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:05:07.601462   72322 start.go:495] detecting cgroup driver to use...
	I0906 20:05:07.601529   72322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:05:07.620368   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:05:07.634848   72322 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:05:07.634912   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:05:07.648810   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:05:07.664084   72322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:05:07.796601   72322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:05:07.974836   72322 docker.go:233] disabling docker service ...
	I0906 20:05:07.974911   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:05:07.989013   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:05:08.002272   72322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:05:08.121115   72322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:05:08.247908   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
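Before switching the node over to CRI-O, the runner stops, disables, and masks both cri-dockerd and the Docker engine so that CRI-O is the only container runtime the kubelet can talk to. The logged systemctl calls condense to roughly this (sketch only):

	# Take cri-dockerd and Docker out of the picture; CRI-O becomes the only runtime.
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service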
	I0906 20:05:08.262855   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:05:08.281662   72322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:05:08.281730   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.292088   72322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:05:08.292165   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.302601   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.313143   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.323852   72322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:05:08.335791   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.347619   72322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.365940   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
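The sed calls above all edit the same drop-in, /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, force the cgroupfs cgroup manager with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. Collected into one sketch (same file and expressions as in the log):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Pin the sandbox image and switch CRI-O to the cgroupfs cgroup manager.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# Make sure default_sysctls exists and allows unprivileged low ports inside pods.
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

crio itself is only restarted once, after these edits plus the br_netfilter and ip_forward checks that follow below.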
	I0906 20:05:08.376124   72322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:05:08.385677   72322 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:05:08.385743   72322 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:05:08.398445   72322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:05:08.408477   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:08.518447   72322 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:05:08.613636   72322 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:05:08.613707   72322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:05:08.619050   72322 start.go:563] Will wait 60s for crictl version
	I0906 20:05:08.619134   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:08.622959   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:05:08.668229   72322 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:05:08.668297   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.702416   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.733283   72322 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:05:05.707351   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.206573   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.707452   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.206554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.706854   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.206925   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.707456   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.207200   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.706741   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:10.206605   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.839381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.839918   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.157889   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:11.158761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:08.734700   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:08.737126   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737477   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:08.737504   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737692   72322 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0906 20:05:08.741940   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:08.756235   72322 kubeadm.go:883] updating cluster {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:05:08.756380   72322 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:05:08.756426   72322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:05:08.798359   72322 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:05:08.798388   72322 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:05:08.798484   72322 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.798507   72322 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.798520   72322 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0906 20:05:08.798559   72322 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.798512   72322 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.798571   72322 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.798494   72322 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.798489   72322 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800044   72322 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.800055   72322 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800048   72322 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0906 20:05:08.800067   72322 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.800070   72322 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.800043   72322 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.800046   72322 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.800050   72322 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.960723   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.967887   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.980496   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.988288   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.990844   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.000220   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.031002   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0906 20:05:09.046388   72322 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0906 20:05:09.046430   72322 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.046471   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.079069   72322 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0906 20:05:09.079112   72322 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.079161   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147423   72322 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0906 20:05:09.147470   72322 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.147521   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147529   72322 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0906 20:05:09.147549   72322 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.147584   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153575   72322 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0906 20:05:09.153612   72322 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.153659   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153662   72322 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0906 20:05:09.153697   72322 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.153736   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.272296   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.272317   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.272325   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.272368   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.272398   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.272474   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.397590   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.398793   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.398807   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.398899   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.398912   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.398969   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.515664   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.529550   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.529604   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.529762   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.532314   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.532385   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.603138   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.654698   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0906 20:05:09.654823   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:09.671020   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0906 20:05:09.671069   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0906 20:05:09.671123   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0906 20:05:09.671156   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:09.671128   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.671208   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:09.686883   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0906 20:05:09.687013   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:09.709594   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0906 20:05:09.709706   72322 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0906 20:05:09.709758   72322 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.709858   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0906 20:05:09.709877   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709868   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.709940   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0906 20:05:09.709906   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709994   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0906 20:05:09.709771   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0906 20:05:09.709973   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0906 20:05:09.709721   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:09.714755   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0906 20:05:12.389459   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.679458658s)
	I0906 20:05:12.389498   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0906 20:05:12.389522   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389524   72322 ssh_runner.go:235] Completed: which crictl: (2.679596804s)
	I0906 20:05:12.389573   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389582   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
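Because no preload tarball matches v1.31.0 with crio (see the "assuming images are not preloaded" message above), each required image is brought over from the local cache one at a time: stat checks whether the tarball already sits under /var/lib/minikube/images, crictl rmi drops any copy whose hash does not match, and podman load imports the tarball into the containers/storage that CRI-O reads on this node image. Per image, the cycle is roughly (sketch, using one of the logged paths):

	TARBALL=/var/lib/minikube/images/kube-apiserver_v1.31.0
	# Drop the mismatched registry tag, then load the cached tarball into the
	# shared containers/storage so CRI-O can see it.
	sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0 || true
	stat -c "%s %y" "$TARBALL"        # present => the scp from the host cache is skipped
	sudo podman load -i "$TARBALL"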
	I0906 20:05:10.706506   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.207411   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.707316   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.207239   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.706502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.206560   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.706593   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.207192   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.706940   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:15.207250   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.338753   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.339694   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.839193   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:13.656815   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.156988   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.349906   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.960304583s)
	I0906 20:05:14.349962   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960364149s)
	I0906 20:05:14.349988   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:14.350001   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0906 20:05:14.350032   72322 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.350085   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.397740   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:16.430883   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.03310928s)
	I0906 20:05:16.430943   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 20:05:16.430977   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.080869318s)
	I0906 20:05:16.431004   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0906 20:05:16.431042   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:16.431042   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:16.431103   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:18.293255   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.862123731s)
	I0906 20:05:18.293274   72322 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.862211647s)
	I0906 20:05:18.293294   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0906 20:05:18.293315   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0906 20:05:18.293324   72322 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:18.293372   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:15.706728   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.207477   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.707337   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.206710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.707209   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.206544   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.707104   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.206752   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.706561   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:20.206507   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.840176   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.339033   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:18.657074   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.157488   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:19.142756   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0906 20:05:19.142784   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:19.142824   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:20.494611   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351756729s)
	I0906 20:05:20.494642   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0906 20:05:20.494656   72322 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.494706   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.706855   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.206585   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.706948   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.207150   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.706508   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.207459   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.706894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.206643   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.707208   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:25.206797   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.838561   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:25.838697   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:23.656303   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:26.156813   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:24.186953   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.692203906s)
	I0906 20:05:24.186987   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0906 20:05:24.187019   72322 cache_images.go:123] Successfully loaded all cached images
	I0906 20:05:24.187026   72322 cache_images.go:92] duration metric: took 15.388623154s to LoadCachedImages
	I0906 20:05:24.187040   72322 kubeadm.go:934] updating node { 192.168.61.184 8443 v1.31.0 crio true true} ...
	I0906 20:05:24.187169   72322 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-504385 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:05:24.187251   72322 ssh_runner.go:195] Run: crio config
	I0906 20:05:24.236699   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:24.236722   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:24.236746   72322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:05:24.236770   72322 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.184 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-504385 NodeName:no-preload-504385 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:05:24.236943   72322 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-504385"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:05:24.237005   72322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:05:24.247480   72322 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:05:24.247554   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:05:24.257088   72322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0906 20:05:24.274447   72322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:05:24.292414   72322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0906 20:05:24.310990   72322 ssh_runner.go:195] Run: grep 192.168.61.184	control-plane.minikube.internal$ /etc/hosts
	I0906 20:05:24.315481   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:24.327268   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:24.465318   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:05:24.482195   72322 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385 for IP: 192.168.61.184
	I0906 20:05:24.482216   72322 certs.go:194] generating shared ca certs ...
	I0906 20:05:24.482230   72322 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:05:24.482364   72322 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:05:24.482407   72322 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:05:24.482420   72322 certs.go:256] generating profile certs ...
	I0906 20:05:24.482522   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/client.key
	I0906 20:05:24.482603   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key.9c78613e
	I0906 20:05:24.482664   72322 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key
	I0906 20:05:24.482828   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:05:24.482878   72322 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:05:24.482894   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:05:24.482927   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:05:24.482956   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:05:24.482992   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:05:24.483043   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:24.483686   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:05:24.528742   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:05:24.561921   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:05:24.596162   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:05:24.636490   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 20:05:24.664450   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:05:24.690551   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:05:24.717308   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:05:24.741498   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:05:24.764388   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:05:24.789473   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:05:24.814772   72322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:05:24.833405   72322 ssh_runner.go:195] Run: openssl version
	I0906 20:05:24.841007   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:05:24.852635   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857351   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857404   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.863435   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:05:24.874059   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:05:24.884939   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889474   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889567   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.895161   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:05:24.905629   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:05:24.916101   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920494   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920550   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.925973   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:05:24.937017   72322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:05:24.941834   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:05:24.947779   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:05:24.954042   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:05:24.959977   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:05:24.965500   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:05:24.970996   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 20:05:24.976532   72322 kubeadm.go:392] StartCluster: {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:05:24.976606   72322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:05:24.976667   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.015556   72322 cri.go:89] found id: ""
	I0906 20:05:25.015653   72322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:05:25.032921   72322 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:05:25.032954   72322 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:05:25.033009   72322 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:05:25.044039   72322 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:05:25.045560   72322 kubeconfig.go:125] found "no-preload-504385" server: "https://192.168.61.184:8443"
	I0906 20:05:25.049085   72322 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:05:25.059027   72322 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.184
	I0906 20:05:25.059060   72322 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:05:25.059073   72322 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:05:25.059128   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.096382   72322 cri.go:89] found id: ""
	I0906 20:05:25.096446   72322 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:05:25.114296   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:05:25.126150   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:05:25.126168   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:05:25.126207   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:05:25.136896   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:05:25.136964   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:05:25.148074   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:05:25.158968   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:05:25.159027   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:05:25.169642   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.179183   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:05:25.179258   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.189449   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:05:25.199237   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:05:25.199286   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:05:25.209663   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:05:25.220511   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:25.336312   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.475543   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.139195419s)
	I0906 20:05:26.475586   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.700018   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.768678   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.901831   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:05:26.901928   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.401987   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.903023   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.957637   72322 api_server.go:72] duration metric: took 1.055807s to wait for apiserver process to appear ...
	I0906 20:05:27.957664   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:05:27.957684   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:27.958196   72322 api_server.go:269] stopped: https://192.168.61.184:8443/healthz: Get "https://192.168.61.184:8443/healthz": dial tcp 192.168.61.184:8443: connect: connection refused
	I0906 20:05:28.458421   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:25.706669   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.206691   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.707336   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.206666   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.706715   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.206488   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.706489   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.207461   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.707293   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:30.206591   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.840001   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:29.840101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.768451   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:05:30.768482   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:05:30.768505   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.868390   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.868430   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:30.958611   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.964946   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.964977   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.458125   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.462130   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.462155   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.958761   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.963320   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.963347   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:32.458596   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:32.464885   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:05:32.474582   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:05:32.474616   72322 api_server.go:131] duration metric: took 4.51694462s to wait for apiserver health ...
	I0906 20:05:32.474627   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:32.474635   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:32.476583   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:05:28.157326   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.657628   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:32.477797   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:05:32.490715   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:05:32.510816   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:05:32.529192   72322 system_pods.go:59] 8 kube-system pods found
	I0906 20:05:32.529236   72322 system_pods.go:61] "coredns-6f6b679f8f-s7tnx" [ce438653-a3b9-4412-8705-7d2db7df5d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:05:32.529254   72322 system_pods.go:61] "etcd-no-preload-504385" [6ec6b2a1-c22a-44b4-b726-808a56f2be2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:05:32.529266   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [5f2baa0b-3cf3-4e0d-984b-80fa19adb3b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:05:32.529275   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [59ffbd51-6a83-43e6-8ef7-bc1cfd80b4d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:05:32.529292   72322 system_pods.go:61] "kube-proxy-dg8sg" [2e0393f3-b9bd-4603-b800-e1a2fdbf71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:05:32.529300   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [52a74c91-a6ec-4d64-8651-e1f87db21b40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:05:32.529306   72322 system_pods.go:61] "metrics-server-6867b74b74-nn295" [9d0f51d1-7abf-4f63-bef7-c02f6cd89c5d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:05:32.529313   72322 system_pods.go:61] "storage-provisioner" [69ed0066-2b84-4a4d-91e5-1e25bb3f31eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:05:32.529320   72322 system_pods.go:74] duration metric: took 18.48107ms to wait for pod list to return data ...
	I0906 20:05:32.529333   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:05:32.535331   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:05:32.535363   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:05:32.535376   72322 node_conditions.go:105] duration metric: took 6.037772ms to run NodePressure ...
	I0906 20:05:32.535397   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:32.955327   72322 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962739   72322 kubeadm.go:739] kubelet initialised
	I0906 20:05:32.962767   72322 kubeadm.go:740] duration metric: took 7.415054ms waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962776   72322 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:05:32.980280   72322 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:30.707091   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.207070   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.707224   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.207295   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.707195   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.207373   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.707519   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.207428   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.706808   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:35.207396   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.340006   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.838636   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:36.838703   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:33.155769   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.156761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.994689   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.487610   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.707415   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.206955   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.706868   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.206515   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.706659   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.206735   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.706915   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.207300   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.707211   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:40.207085   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.839362   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:41.338875   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.657190   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.158940   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:39.986557   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.486518   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.706720   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.206896   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.707281   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.206751   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.706754   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.206987   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.707245   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.207502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.707112   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:45.206569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.339353   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.838975   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.657187   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.156196   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:47.157014   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:43.986675   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.986701   72322 pod_ready.go:82] duration metric: took 11.006397745s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.986710   72322 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991650   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.991671   72322 pod_ready.go:82] duration metric: took 4.955425ms for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991680   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997218   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:44.997242   72322 pod_ready.go:82] duration metric: took 1.005553613s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997253   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002155   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.002177   72322 pod_ready.go:82] duration metric: took 4.916677ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002186   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006610   72322 pod_ready.go:93] pod "kube-proxy-dg8sg" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.006631   72322 pod_ready.go:82] duration metric: took 4.439092ms for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006639   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185114   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.185139   72322 pod_ready.go:82] duration metric: took 178.494249ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185149   72322 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:47.191676   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.707450   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.207446   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.707006   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.206484   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.707168   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.207536   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.707554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.206894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.706709   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:50.206799   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.338355   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.839372   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.157301   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.157426   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.193619   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.692286   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.707012   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.206914   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.706917   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.207465   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.706682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.206565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.706757   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.206600   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.706926   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:55.207382   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.338845   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.339570   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:53.656904   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.158806   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:54.191331   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.192498   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.707103   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.206621   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.707156   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.207277   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.706568   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:58.206599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:05:58.206698   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:05:58.245828   73230 cri.go:89] found id: ""
	I0906 20:05:58.245857   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.245868   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:05:58.245875   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:05:58.245938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:05:58.283189   73230 cri.go:89] found id: ""
	I0906 20:05:58.283217   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.283228   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:05:58.283235   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:05:58.283303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:05:58.320834   73230 cri.go:89] found id: ""
	I0906 20:05:58.320868   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.320880   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:05:58.320889   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:05:58.320944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:05:58.356126   73230 cri.go:89] found id: ""
	I0906 20:05:58.356152   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.356162   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:05:58.356169   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:05:58.356227   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:05:58.395951   73230 cri.go:89] found id: ""
	I0906 20:05:58.395977   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.395987   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:05:58.395994   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:05:58.396061   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:05:58.431389   73230 cri.go:89] found id: ""
	I0906 20:05:58.431415   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.431426   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:05:58.431433   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:05:58.431511   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:05:58.466255   73230 cri.go:89] found id: ""
	I0906 20:05:58.466285   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.466294   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:05:58.466300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:05:58.466356   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:05:58.505963   73230 cri.go:89] found id: ""
	I0906 20:05:58.505989   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.505997   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:05:58.506006   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:05:58.506018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:05:58.579027   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:05:58.579061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:05:58.620332   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:05:58.620365   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:05:58.675017   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:05:58.675052   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:05:58.689944   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:05:58.689970   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:05:58.825396   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:05:57.838610   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.339329   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.656312   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.656996   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.691099   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.692040   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.192516   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:01.326375   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:01.340508   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:01.340570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:01.375429   73230 cri.go:89] found id: ""
	I0906 20:06:01.375460   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.375470   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:01.375478   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:01.375539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:01.410981   73230 cri.go:89] found id: ""
	I0906 20:06:01.411008   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.411019   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:01.411026   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:01.411083   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:01.448925   73230 cri.go:89] found id: ""
	I0906 20:06:01.448957   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.448968   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:01.448975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:01.449040   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:01.492063   73230 cri.go:89] found id: ""
	I0906 20:06:01.492094   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.492104   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:01.492112   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:01.492181   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:01.557779   73230 cri.go:89] found id: ""
	I0906 20:06:01.557812   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.557823   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:01.557830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:01.557892   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:01.604397   73230 cri.go:89] found id: ""
	I0906 20:06:01.604424   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.604432   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:01.604437   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:01.604482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:01.642249   73230 cri.go:89] found id: ""
	I0906 20:06:01.642280   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.642292   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:01.642300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:01.642364   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:01.692434   73230 cri.go:89] found id: ""
	I0906 20:06:01.692462   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.692474   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:01.692483   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:01.692498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:01.705860   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:01.705884   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:01.783929   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:01.783954   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:01.783965   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:01.864347   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:01.864385   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:01.902284   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:01.902311   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:04.456090   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:04.469775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:04.469840   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:04.505742   73230 cri.go:89] found id: ""
	I0906 20:06:04.505769   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.505778   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:04.505783   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:04.505835   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:04.541787   73230 cri.go:89] found id: ""
	I0906 20:06:04.541811   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.541819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:04.541824   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:04.541874   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:04.578775   73230 cri.go:89] found id: ""
	I0906 20:06:04.578806   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.578817   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:04.578825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:04.578885   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:04.614505   73230 cri.go:89] found id: ""
	I0906 20:06:04.614533   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.614542   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:04.614548   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:04.614594   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:04.652988   73230 cri.go:89] found id: ""
	I0906 20:06:04.653016   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.653027   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:04.653035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:04.653104   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:04.692380   73230 cri.go:89] found id: ""
	I0906 20:06:04.692408   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.692416   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:04.692423   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:04.692478   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:04.729846   73230 cri.go:89] found id: ""
	I0906 20:06:04.729869   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.729880   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:04.729887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:04.729953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:04.766341   73230 cri.go:89] found id: ""
	I0906 20:06:04.766370   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.766379   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:04.766390   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:04.766405   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:04.779801   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:04.779828   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:04.855313   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:04.855334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:04.855346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:04.934210   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:04.934246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:04.975589   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:04.975621   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:02.839427   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:04.840404   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.158048   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.655510   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.192558   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.692755   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.528622   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:07.544085   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:07.544156   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:07.588106   73230 cri.go:89] found id: ""
	I0906 20:06:07.588139   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.588149   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:07.588157   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:07.588210   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:07.630440   73230 cri.go:89] found id: ""
	I0906 20:06:07.630476   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.630494   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:07.630500   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:07.630551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:07.668826   73230 cri.go:89] found id: ""
	I0906 20:06:07.668870   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.668889   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:07.668898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:07.668962   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:07.706091   73230 cri.go:89] found id: ""
	I0906 20:06:07.706118   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.706130   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:07.706138   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:07.706196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:07.741679   73230 cri.go:89] found id: ""
	I0906 20:06:07.741708   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.741719   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:07.741726   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:07.741792   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:07.778240   73230 cri.go:89] found id: ""
	I0906 20:06:07.778277   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.778288   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:07.778296   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:07.778352   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:07.813183   73230 cri.go:89] found id: ""
	I0906 20:06:07.813212   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.813224   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:07.813232   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:07.813294   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:07.853938   73230 cri.go:89] found id: ""
	I0906 20:06:07.853970   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.853980   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:07.853988   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:07.854001   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:07.893540   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:07.893567   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:07.944219   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:07.944262   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:07.959601   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:07.959635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:08.034487   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:08.034513   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:08.034529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:07.339634   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:09.838953   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.658315   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.157980   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.192738   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.691823   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.611413   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:10.625273   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:10.625353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:10.664568   73230 cri.go:89] found id: ""
	I0906 20:06:10.664597   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.664609   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:10.664617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:10.664680   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:10.702743   73230 cri.go:89] found id: ""
	I0906 20:06:10.702772   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.702783   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:10.702790   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:10.702850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:10.739462   73230 cri.go:89] found id: ""
	I0906 20:06:10.739487   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.739504   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:10.739511   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:10.739572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:10.776316   73230 cri.go:89] found id: ""
	I0906 20:06:10.776344   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.776355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:10.776362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:10.776420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:10.809407   73230 cri.go:89] found id: ""
	I0906 20:06:10.809440   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.809451   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:10.809459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:10.809519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:10.844736   73230 cri.go:89] found id: ""
	I0906 20:06:10.844765   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.844777   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:10.844784   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:10.844851   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:10.880658   73230 cri.go:89] found id: ""
	I0906 20:06:10.880685   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.880693   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:10.880698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:10.880753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:10.917032   73230 cri.go:89] found id: ""
	I0906 20:06:10.917063   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.917074   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:10.917085   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:10.917100   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:10.980241   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:10.980272   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:10.995389   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:10.995435   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:11.070285   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:11.070313   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:11.070328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:11.155574   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:11.155607   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:13.703712   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:13.718035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:13.718093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:13.753578   73230 cri.go:89] found id: ""
	I0906 20:06:13.753603   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.753611   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:13.753617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:13.753659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:13.790652   73230 cri.go:89] found id: ""
	I0906 20:06:13.790681   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.790691   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:13.790697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:13.790749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:13.824243   73230 cri.go:89] found id: ""
	I0906 20:06:13.824278   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.824288   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:13.824293   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:13.824342   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:13.859647   73230 cri.go:89] found id: ""
	I0906 20:06:13.859691   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.859702   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:13.859721   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:13.859781   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:13.897026   73230 cri.go:89] found id: ""
	I0906 20:06:13.897061   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.897068   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:13.897075   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:13.897131   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:13.933904   73230 cri.go:89] found id: ""
	I0906 20:06:13.933927   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.933935   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:13.933941   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:13.933986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:13.969168   73230 cri.go:89] found id: ""
	I0906 20:06:13.969198   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.969210   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:13.969218   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:13.969295   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:14.005808   73230 cri.go:89] found id: ""
	I0906 20:06:14.005838   73230 logs.go:276] 0 containers: []
	W0906 20:06:14.005849   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:14.005862   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:14.005878   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:14.060878   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:14.060915   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:14.075388   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:14.075414   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:14.144942   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:14.144966   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:14.144981   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:14.233088   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:14.233139   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:12.338579   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.839062   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.655992   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.657020   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.157119   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.692103   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.193196   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:16.776744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:16.790292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:16.790384   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:16.828877   73230 cri.go:89] found id: ""
	I0906 20:06:16.828910   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.828921   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:16.828929   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:16.829016   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:16.864413   73230 cri.go:89] found id: ""
	I0906 20:06:16.864440   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.864449   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:16.864455   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:16.864525   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:16.908642   73230 cri.go:89] found id: ""
	I0906 20:06:16.908676   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.908687   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:16.908694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:16.908748   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:16.952247   73230 cri.go:89] found id: ""
	I0906 20:06:16.952278   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.952286   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:16.952292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:16.952343   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:16.990986   73230 cri.go:89] found id: ""
	I0906 20:06:16.991013   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.991022   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:16.991028   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:16.991077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:17.031002   73230 cri.go:89] found id: ""
	I0906 20:06:17.031034   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.031045   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:17.031052   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:17.031114   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:17.077533   73230 cri.go:89] found id: ""
	I0906 20:06:17.077560   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.077572   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:17.077579   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:17.077646   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:17.116770   73230 cri.go:89] found id: ""
	I0906 20:06:17.116798   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.116806   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:17.116817   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:17.116834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.169300   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:17.169337   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:17.184266   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:17.184299   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:17.266371   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:17.266400   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:17.266419   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:17.343669   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:17.343698   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:19.886541   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:19.899891   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:19.899951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:19.946592   73230 cri.go:89] found id: ""
	I0906 20:06:19.946621   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.946630   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:19.946636   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:19.946686   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:19.981758   73230 cri.go:89] found id: ""
	I0906 20:06:19.981788   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.981797   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:19.981802   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:19.981854   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:20.018372   73230 cri.go:89] found id: ""
	I0906 20:06:20.018397   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.018405   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:20.018411   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:20.018460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:20.054380   73230 cri.go:89] found id: ""
	I0906 20:06:20.054428   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.054440   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:20.054449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:20.054521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:20.092343   73230 cri.go:89] found id: ""
	I0906 20:06:20.092376   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.092387   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:20.092395   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:20.092463   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:20.128568   73230 cri.go:89] found id: ""
	I0906 20:06:20.128594   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.128604   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:20.128610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:20.128657   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:20.166018   73230 cri.go:89] found id: ""
	I0906 20:06:20.166046   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.166057   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:20.166072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:20.166125   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:20.203319   73230 cri.go:89] found id: ""
	I0906 20:06:20.203347   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.203355   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:20.203365   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:20.203381   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:20.287217   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:20.287243   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:20.287259   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:20.372799   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:20.372834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:20.416595   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:20.416620   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.338546   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.342409   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.838689   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.657411   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:22.157972   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.691327   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.692066   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:20.468340   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:20.468378   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:22.983259   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:22.997014   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:22.997098   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:23.034483   73230 cri.go:89] found id: ""
	I0906 20:06:23.034513   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.034524   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:23.034531   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:23.034597   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:23.072829   73230 cri.go:89] found id: ""
	I0906 20:06:23.072867   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.072878   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:23.072885   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:23.072949   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:23.110574   73230 cri.go:89] found id: ""
	I0906 20:06:23.110602   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.110613   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:23.110620   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:23.110684   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:23.149506   73230 cri.go:89] found id: ""
	I0906 20:06:23.149538   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.149550   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:23.149557   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:23.149619   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:23.191321   73230 cri.go:89] found id: ""
	I0906 20:06:23.191355   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.191367   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:23.191374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:23.191441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:23.233737   73230 cri.go:89] found id: ""
	I0906 20:06:23.233770   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.233791   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:23.233800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:23.233873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:23.270013   73230 cri.go:89] found id: ""
	I0906 20:06:23.270048   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.270060   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:23.270068   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:23.270127   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:23.309517   73230 cri.go:89] found id: ""
	I0906 20:06:23.309541   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.309549   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:23.309566   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:23.309578   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:23.380645   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:23.380675   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:23.380690   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:23.463656   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:23.463696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:23.504100   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:23.504134   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:23.557438   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:23.557483   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:23.841101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.340722   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.658261   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:27.155171   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.193829   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.690602   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.074045   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:26.088006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:26.088072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:26.124445   73230 cri.go:89] found id: ""
	I0906 20:06:26.124469   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.124476   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:26.124482   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:26.124537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:26.158931   73230 cri.go:89] found id: ""
	I0906 20:06:26.158957   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.158968   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:26.158975   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:26.159035   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:26.197125   73230 cri.go:89] found id: ""
	I0906 20:06:26.197154   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.197164   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:26.197171   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:26.197234   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:26.233241   73230 cri.go:89] found id: ""
	I0906 20:06:26.233278   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.233291   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:26.233300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:26.233366   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:26.269910   73230 cri.go:89] found id: ""
	I0906 20:06:26.269943   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.269955   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:26.269962   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:26.270026   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:26.308406   73230 cri.go:89] found id: ""
	I0906 20:06:26.308439   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.308450   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:26.308459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:26.308521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:26.344248   73230 cri.go:89] found id: ""
	I0906 20:06:26.344276   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.344288   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:26.344295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:26.344353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:26.391794   73230 cri.go:89] found id: ""
	I0906 20:06:26.391827   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.391840   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:26.391851   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:26.391866   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:26.444192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:26.444231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:26.459113   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:26.459144   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:26.533920   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:26.533945   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:26.533960   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:26.616382   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:26.616416   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:29.160429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:29.175007   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:29.175063   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:29.212929   73230 cri.go:89] found id: ""
	I0906 20:06:29.212961   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.212972   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:29.212980   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:29.213042   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:29.250777   73230 cri.go:89] found id: ""
	I0906 20:06:29.250806   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.250815   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:29.250821   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:29.250870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:29.292222   73230 cri.go:89] found id: ""
	I0906 20:06:29.292253   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.292262   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:29.292268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:29.292331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:29.328379   73230 cri.go:89] found id: ""
	I0906 20:06:29.328413   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.328431   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:29.328436   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:29.328482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:29.366792   73230 cri.go:89] found id: ""
	I0906 20:06:29.366822   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.366834   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:29.366841   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:29.366903   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:29.402233   73230 cri.go:89] found id: ""
	I0906 20:06:29.402261   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.402270   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:29.402276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:29.402331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:29.436695   73230 cri.go:89] found id: ""
	I0906 20:06:29.436724   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.436731   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:29.436736   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:29.436787   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:29.473050   73230 cri.go:89] found id: ""
	I0906 20:06:29.473074   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.473082   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:29.473091   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:29.473101   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:29.524981   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:29.525018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:29.538698   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:29.538722   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:29.611026   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:29.611049   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:29.611064   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:29.686898   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:29.686931   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:28.839118   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:30.839532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:29.156985   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.656552   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:28.694188   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.191032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.192623   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:32.228399   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:32.244709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:32.244775   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:32.285681   73230 cri.go:89] found id: ""
	I0906 20:06:32.285713   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.285724   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:32.285732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:32.285794   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:32.325312   73230 cri.go:89] found id: ""
	I0906 20:06:32.325340   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.325349   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:32.325355   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:32.325400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:32.361420   73230 cri.go:89] found id: ""
	I0906 20:06:32.361455   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.361468   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:32.361477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:32.361543   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:32.398881   73230 cri.go:89] found id: ""
	I0906 20:06:32.398956   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.398971   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:32.398984   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:32.399041   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:32.435336   73230 cri.go:89] found id: ""
	I0906 20:06:32.435362   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.435370   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:32.435375   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:32.435427   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:32.472849   73230 cri.go:89] found id: ""
	I0906 20:06:32.472900   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.472909   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:32.472914   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:32.472964   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:32.508176   73230 cri.go:89] found id: ""
	I0906 20:06:32.508199   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.508208   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:32.508213   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:32.508271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:32.550519   73230 cri.go:89] found id: ""
	I0906 20:06:32.550550   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.550561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:32.550576   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:32.550593   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:32.601362   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:32.601394   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:32.614821   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:32.614849   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:32.686044   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:32.686061   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:32.686074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:32.767706   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:32.767744   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:35.309159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:35.322386   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:35.322462   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:35.362909   73230 cri.go:89] found id: ""
	I0906 20:06:35.362937   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.362948   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:35.362955   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:35.363017   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:35.400591   73230 cri.go:89] found id: ""
	I0906 20:06:35.400621   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.400629   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:35.400635   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:35.400682   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:35.436547   73230 cri.go:89] found id: ""
	I0906 20:06:35.436578   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.436589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:35.436596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:35.436666   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:33.338812   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.340154   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.656782   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.657043   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.691312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:37.691358   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.473130   73230 cri.go:89] found id: ""
	I0906 20:06:35.473155   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.473163   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:35.473168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:35.473244   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:35.509646   73230 cri.go:89] found id: ""
	I0906 20:06:35.509677   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.509687   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:35.509695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:35.509754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:35.547651   73230 cri.go:89] found id: ""
	I0906 20:06:35.547684   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.547696   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:35.547703   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:35.547761   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:35.608590   73230 cri.go:89] found id: ""
	I0906 20:06:35.608614   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.608624   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:35.608631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:35.608691   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:35.651508   73230 cri.go:89] found id: ""
	I0906 20:06:35.651550   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.651561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:35.651572   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:35.651585   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:35.705502   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:35.705542   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:35.719550   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:35.719577   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:35.791435   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:35.791461   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:35.791476   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:35.869018   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:35.869070   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:38.411587   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:38.425739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:38.425800   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:38.463534   73230 cri.go:89] found id: ""
	I0906 20:06:38.463560   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.463571   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:38.463578   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:38.463628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:38.499238   73230 cri.go:89] found id: ""
	I0906 20:06:38.499269   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.499280   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:38.499287   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:38.499340   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:38.536297   73230 cri.go:89] found id: ""
	I0906 20:06:38.536334   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.536345   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:38.536352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:38.536417   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:38.573672   73230 cri.go:89] found id: ""
	I0906 20:06:38.573701   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.573712   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:38.573720   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:38.573779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:38.610913   73230 cri.go:89] found id: ""
	I0906 20:06:38.610937   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.610945   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:38.610950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:38.610996   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:38.647335   73230 cri.go:89] found id: ""
	I0906 20:06:38.647359   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.647368   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:38.647374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:38.647418   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:38.684054   73230 cri.go:89] found id: ""
	I0906 20:06:38.684084   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.684097   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:38.684106   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:38.684174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:38.731134   73230 cri.go:89] found id: ""
	I0906 20:06:38.731161   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.731173   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:38.731183   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:38.731199   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:38.787757   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:38.787798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:38.802920   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:38.802955   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:38.889219   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:38.889246   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:38.889261   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:38.964999   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:38.965042   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:37.838886   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.338914   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:38.156615   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.656577   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:39.691609   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.692330   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.504406   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:41.518111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:41.518169   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:41.558701   73230 cri.go:89] found id: ""
	I0906 20:06:41.558727   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.558738   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:41.558746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:41.558807   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:41.595986   73230 cri.go:89] found id: ""
	I0906 20:06:41.596009   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.596017   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:41.596023   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:41.596070   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:41.631462   73230 cri.go:89] found id: ""
	I0906 20:06:41.631486   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.631494   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:41.631504   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:41.631559   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:41.669646   73230 cri.go:89] found id: ""
	I0906 20:06:41.669674   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.669686   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:41.669693   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:41.669754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:41.708359   73230 cri.go:89] found id: ""
	I0906 20:06:41.708383   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.708391   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:41.708398   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:41.708446   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:41.745712   73230 cri.go:89] found id: ""
	I0906 20:06:41.745737   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.745750   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:41.745756   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:41.745804   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:41.781862   73230 cri.go:89] found id: ""
	I0906 20:06:41.781883   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.781892   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:41.781898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:41.781946   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:41.816687   73230 cri.go:89] found id: ""
	I0906 20:06:41.816714   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.816722   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:41.816730   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:41.816742   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:41.830115   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:41.830145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:41.908303   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:41.908334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:41.908348   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:42.001459   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:42.001501   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:42.061341   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:42.061368   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:44.619574   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:44.633355   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:44.633423   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:44.668802   73230 cri.go:89] found id: ""
	I0906 20:06:44.668834   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.668845   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:44.668852   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:44.668924   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:44.707613   73230 cri.go:89] found id: ""
	I0906 20:06:44.707639   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.707650   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:44.707657   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:44.707727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:44.744202   73230 cri.go:89] found id: ""
	I0906 20:06:44.744231   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.744243   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:44.744250   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:44.744311   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:44.783850   73230 cri.go:89] found id: ""
	I0906 20:06:44.783873   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.783881   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:44.783886   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:44.783938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:44.824986   73230 cri.go:89] found id: ""
	I0906 20:06:44.825011   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.825019   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:44.825025   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:44.825073   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:44.865157   73230 cri.go:89] found id: ""
	I0906 20:06:44.865182   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.865190   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:44.865196   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:44.865258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:44.908268   73230 cri.go:89] found id: ""
	I0906 20:06:44.908295   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.908305   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:44.908312   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:44.908359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:44.948669   73230 cri.go:89] found id: ""
	I0906 20:06:44.948697   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.948706   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:44.948716   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:44.948731   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:44.961862   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:44.961887   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:45.036756   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:45.036783   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:45.036801   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:45.116679   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:45.116717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:45.159756   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:45.159784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:42.339271   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.839443   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:43.155878   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:45.158884   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.192211   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:46.692140   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.714682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:47.730754   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:47.730820   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:47.783208   73230 cri.go:89] found id: ""
	I0906 20:06:47.783239   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.783249   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:47.783255   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:47.783312   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:47.844291   73230 cri.go:89] found id: ""
	I0906 20:06:47.844324   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.844336   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:47.844344   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:47.844407   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:47.881877   73230 cri.go:89] found id: ""
	I0906 20:06:47.881905   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.881913   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:47.881919   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:47.881986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:47.918034   73230 cri.go:89] found id: ""
	I0906 20:06:47.918058   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.918066   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:47.918072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:47.918126   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:47.957045   73230 cri.go:89] found id: ""
	I0906 20:06:47.957068   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.957077   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:47.957083   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:47.957134   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:47.993849   73230 cri.go:89] found id: ""
	I0906 20:06:47.993872   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.993883   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:47.993890   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:47.993951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:48.031214   73230 cri.go:89] found id: ""
	I0906 20:06:48.031239   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.031249   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:48.031257   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:48.031314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:48.064634   73230 cri.go:89] found id: ""
	I0906 20:06:48.064673   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.064690   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:48.064698   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:48.064710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:48.104307   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:48.104343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:48.158869   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:48.158900   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:48.173000   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:48.173026   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:48.248751   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:48.248774   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:48.248792   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:47.339014   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.339656   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.838817   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.656402   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.156349   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:52.156651   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.192411   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.691635   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.833490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:50.847618   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:50.847702   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:50.887141   73230 cri.go:89] found id: ""
	I0906 20:06:50.887167   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.887176   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:50.887181   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:50.887228   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:50.923435   73230 cri.go:89] found id: ""
	I0906 20:06:50.923480   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.923491   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:50.923499   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:50.923567   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:50.959704   73230 cri.go:89] found id: ""
	I0906 20:06:50.959730   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.959742   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:50.959748   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:50.959810   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:50.992994   73230 cri.go:89] found id: ""
	I0906 20:06:50.993023   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.993032   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:50.993037   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:50.993091   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:51.031297   73230 cri.go:89] found id: ""
	I0906 20:06:51.031321   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.031329   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:51.031335   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:51.031390   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:51.067698   73230 cri.go:89] found id: ""
	I0906 20:06:51.067721   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.067732   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:51.067739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:51.067799   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:51.102240   73230 cri.go:89] found id: ""
	I0906 20:06:51.102268   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.102278   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:51.102285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:51.102346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:51.137146   73230 cri.go:89] found id: ""
	I0906 20:06:51.137172   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.137183   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:51.137194   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:51.137209   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:51.216158   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:51.216194   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:51.256063   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:51.256088   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:51.309176   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:51.309210   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:51.323515   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:51.323544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:51.393281   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:53.893714   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:53.907807   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:53.907863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:53.947929   73230 cri.go:89] found id: ""
	I0906 20:06:53.947954   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.947962   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:53.947968   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:53.948014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:53.983005   73230 cri.go:89] found id: ""
	I0906 20:06:53.983028   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.983041   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:53.983046   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:53.983094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:54.019004   73230 cri.go:89] found id: ""
	I0906 20:06:54.019027   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.019035   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:54.019041   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:54.019094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:54.060240   73230 cri.go:89] found id: ""
	I0906 20:06:54.060266   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.060279   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:54.060285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:54.060336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:54.096432   73230 cri.go:89] found id: ""
	I0906 20:06:54.096461   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.096469   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:54.096475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:54.096537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:54.132992   73230 cri.go:89] found id: ""
	I0906 20:06:54.133021   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.133033   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:54.133040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:54.133103   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:54.172730   73230 cri.go:89] found id: ""
	I0906 20:06:54.172754   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.172766   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:54.172778   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:54.172839   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:54.212050   73230 cri.go:89] found id: ""
	I0906 20:06:54.212191   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.212202   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:54.212212   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:54.212234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:54.263603   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:54.263647   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:54.281291   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:54.281324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:54.359523   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:54.359545   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:54.359568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:54.442230   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:54.442265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:54.339159   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.841459   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.157379   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.656134   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.191878   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.691766   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.983744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:56.997451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:56.997527   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:57.034792   73230 cri.go:89] found id: ""
	I0906 20:06:57.034817   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.034825   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:57.034831   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:57.034883   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:57.073709   73230 cri.go:89] found id: ""
	I0906 20:06:57.073735   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.073745   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:57.073751   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:57.073803   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:57.122758   73230 cri.go:89] found id: ""
	I0906 20:06:57.122787   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.122798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:57.122808   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:57.122865   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:57.158208   73230 cri.go:89] found id: ""
	I0906 20:06:57.158242   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.158252   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:57.158262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:57.158323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:57.194004   73230 cri.go:89] found id: ""
	I0906 20:06:57.194029   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.194037   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:57.194044   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:57.194099   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:57.230068   73230 cri.go:89] found id: ""
	I0906 20:06:57.230099   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.230111   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:57.230119   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:57.230186   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:57.265679   73230 cri.go:89] found id: ""
	I0906 20:06:57.265707   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.265718   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:57.265735   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:57.265801   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:57.304917   73230 cri.go:89] found id: ""
	I0906 20:06:57.304946   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.304956   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:57.304967   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:57.304980   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:57.357238   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:57.357276   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:57.371648   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:57.371674   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:57.438572   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:57.438590   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:57.438602   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:57.528212   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:57.528256   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:00.071140   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:00.084975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:00.085055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:00.119680   73230 cri.go:89] found id: ""
	I0906 20:07:00.119713   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.119725   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:00.119732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:00.119786   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:00.155678   73230 cri.go:89] found id: ""
	I0906 20:07:00.155704   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.155716   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:00.155723   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:00.155769   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:00.190758   73230 cri.go:89] found id: ""
	I0906 20:07:00.190783   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.190793   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:00.190799   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:00.190863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:00.228968   73230 cri.go:89] found id: ""
	I0906 20:07:00.228999   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.229010   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:00.229018   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:00.229079   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:00.265691   73230 cri.go:89] found id: ""
	I0906 20:07:00.265722   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.265733   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:00.265741   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:00.265806   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:00.305785   73230 cri.go:89] found id: ""
	I0906 20:07:00.305812   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.305820   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:00.305825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:00.305872   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:00.341872   73230 cri.go:89] found id: ""
	I0906 20:07:00.341895   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.341902   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:00.341907   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:00.341955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:00.377661   73230 cri.go:89] found id: ""
	I0906 20:07:00.377690   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.377702   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:00.377712   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:00.377725   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:00.428215   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:00.428254   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:00.443135   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:00.443165   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 20:06:59.337996   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.338924   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:58.657236   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.156973   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:59.191556   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.192082   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.193511   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	W0906 20:07:00.518745   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:00.518768   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:00.518781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:00.604413   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:00.604448   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.146657   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:03.160610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:03.160665   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:03.200916   73230 cri.go:89] found id: ""
	I0906 20:07:03.200950   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.200960   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:03.200967   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:03.201029   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:03.239550   73230 cri.go:89] found id: ""
	I0906 20:07:03.239579   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.239592   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:03.239600   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:03.239660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:03.278216   73230 cri.go:89] found id: ""
	I0906 20:07:03.278244   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.278255   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:03.278263   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:03.278325   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:03.315028   73230 cri.go:89] found id: ""
	I0906 20:07:03.315059   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.315073   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:03.315080   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:03.315146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:03.354614   73230 cri.go:89] found id: ""
	I0906 20:07:03.354638   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.354647   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:03.354652   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:03.354710   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:03.390105   73230 cri.go:89] found id: ""
	I0906 20:07:03.390129   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.390138   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:03.390144   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:03.390190   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:03.427651   73230 cri.go:89] found id: ""
	I0906 20:07:03.427679   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.427687   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:03.427695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:03.427763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:03.463191   73230 cri.go:89] found id: ""
	I0906 20:07:03.463220   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.463230   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:03.463242   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:03.463288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:03.476966   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:03.476995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:03.558415   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:03.558441   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:03.558457   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:03.641528   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:03.641564   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.680916   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:03.680943   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:03.339511   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.340113   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.157907   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.160507   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.692151   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:08.191782   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:06.235947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:06.249589   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:06.249667   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:06.289193   73230 cri.go:89] found id: ""
	I0906 20:07:06.289223   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.289235   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:06.289242   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:06.289305   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:06.324847   73230 cri.go:89] found id: ""
	I0906 20:07:06.324887   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.324898   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:06.324904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:06.324966   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:06.361755   73230 cri.go:89] found id: ""
	I0906 20:07:06.361786   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.361798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:06.361806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:06.361873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:06.397739   73230 cri.go:89] found id: ""
	I0906 20:07:06.397766   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.397775   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:06.397780   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:06.397833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:06.432614   73230 cri.go:89] found id: ""
	I0906 20:07:06.432641   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.432649   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:06.432655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:06.432703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:06.467784   73230 cri.go:89] found id: ""
	I0906 20:07:06.467812   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.467823   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:06.467830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:06.467890   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:06.507055   73230 cri.go:89] found id: ""
	I0906 20:07:06.507085   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.507096   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:06.507104   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:06.507165   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:06.544688   73230 cri.go:89] found id: ""
	I0906 20:07:06.544720   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.544730   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:06.544740   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:06.544751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:06.597281   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:06.597314   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:06.612749   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:06.612774   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:06.684973   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:06.684993   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:06.685006   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:06.764306   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:06.764345   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.304340   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:09.317460   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:09.317536   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:09.354289   73230 cri.go:89] found id: ""
	I0906 20:07:09.354312   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.354322   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:09.354327   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:09.354373   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:09.390962   73230 cri.go:89] found id: ""
	I0906 20:07:09.390997   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.391008   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:09.391015   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:09.391076   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:09.427456   73230 cri.go:89] found id: ""
	I0906 20:07:09.427491   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.427502   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:09.427510   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:09.427572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:09.462635   73230 cri.go:89] found id: ""
	I0906 20:07:09.462667   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.462680   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:09.462687   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:09.462749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:09.506726   73230 cri.go:89] found id: ""
	I0906 20:07:09.506751   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.506767   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:09.506775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:09.506836   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:09.541974   73230 cri.go:89] found id: ""
	I0906 20:07:09.541999   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.542009   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:09.542017   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:09.542077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:09.580069   73230 cri.go:89] found id: ""
	I0906 20:07:09.580104   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.580115   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:09.580123   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:09.580182   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:09.616025   73230 cri.go:89] found id: ""
	I0906 20:07:09.616054   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.616065   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:09.616075   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:09.616090   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:09.630967   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:09.630993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:09.716733   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:09.716766   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:09.716782   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:09.792471   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:09.792503   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.832326   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:09.832357   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:07.840909   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.339239   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:07.655710   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:09.656069   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:11.656458   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.192155   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.192716   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.385565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:12.398694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:12.398768   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:12.437446   73230 cri.go:89] found id: ""
	I0906 20:07:12.437473   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.437482   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:12.437487   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:12.437555   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:12.473328   73230 cri.go:89] found id: ""
	I0906 20:07:12.473355   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.473362   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:12.473372   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:12.473429   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:12.510935   73230 cri.go:89] found id: ""
	I0906 20:07:12.510962   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.510972   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:12.510979   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:12.511044   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:12.547961   73230 cri.go:89] found id: ""
	I0906 20:07:12.547991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.547999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:12.548005   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:12.548062   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:12.585257   73230 cri.go:89] found id: ""
	I0906 20:07:12.585291   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.585302   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:12.585309   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:12.585369   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:12.623959   73230 cri.go:89] found id: ""
	I0906 20:07:12.623991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.624003   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:12.624010   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:12.624066   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:12.662795   73230 cri.go:89] found id: ""
	I0906 20:07:12.662822   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.662832   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:12.662840   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:12.662896   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:12.700941   73230 cri.go:89] found id: ""
	I0906 20:07:12.700967   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.700974   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:12.700983   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:12.700994   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:12.785989   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:12.786025   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:12.826678   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:12.826704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:12.881558   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:12.881599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:12.896035   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:12.896065   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:12.970721   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:12.839031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.339615   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:13.656809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.657470   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:14.691032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:16.692697   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.471171   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:15.484466   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:15.484541   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:15.518848   73230 cri.go:89] found id: ""
	I0906 20:07:15.518875   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.518886   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:15.518894   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:15.518953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:15.553444   73230 cri.go:89] found id: ""
	I0906 20:07:15.553468   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.553476   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:15.553482   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:15.553528   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:15.589136   73230 cri.go:89] found id: ""
	I0906 20:07:15.589160   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.589168   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:15.589173   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:15.589220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:15.624410   73230 cri.go:89] found id: ""
	I0906 20:07:15.624434   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.624443   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:15.624449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:15.624492   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:15.661506   73230 cri.go:89] found id: ""
	I0906 20:07:15.661535   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.661547   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:15.661555   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:15.661615   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:15.699126   73230 cri.go:89] found id: ""
	I0906 20:07:15.699148   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.699155   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:15.699161   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:15.699207   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:15.736489   73230 cri.go:89] found id: ""
	I0906 20:07:15.736523   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.736534   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:15.736542   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:15.736604   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:15.771988   73230 cri.go:89] found id: ""
	I0906 20:07:15.772013   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.772020   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:15.772029   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:15.772045   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:15.822734   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:15.822765   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:15.836820   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:15.836872   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:15.915073   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:15.915111   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:15.915126   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:15.988476   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:15.988514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:18.528710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:18.541450   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:18.541526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:18.581278   73230 cri.go:89] found id: ""
	I0906 20:07:18.581308   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.581317   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:18.581323   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:18.581381   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:18.616819   73230 cri.go:89] found id: ""
	I0906 20:07:18.616843   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.616850   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:18.616871   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:18.616923   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:18.655802   73230 cri.go:89] found id: ""
	I0906 20:07:18.655827   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.655842   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:18.655849   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:18.655908   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:18.693655   73230 cri.go:89] found id: ""
	I0906 20:07:18.693679   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.693689   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:18.693696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:18.693779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:18.730882   73230 cri.go:89] found id: ""
	I0906 20:07:18.730914   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.730924   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:18.730931   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:18.730994   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:18.767219   73230 cri.go:89] found id: ""
	I0906 20:07:18.767243   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.767250   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:18.767256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:18.767316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:18.802207   73230 cri.go:89] found id: ""
	I0906 20:07:18.802230   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.802238   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:18.802243   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:18.802300   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:18.840449   73230 cri.go:89] found id: ""
	I0906 20:07:18.840471   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.840481   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:18.840491   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:18.840504   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:18.892430   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:18.892469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:18.906527   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:18.906561   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:18.980462   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:18.980483   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:18.980494   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:19.059550   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:19.059588   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:17.340292   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:19.840090   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.156486   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:20.657764   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.693021   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.191529   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.191865   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.599879   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:21.614131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:21.614205   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:21.650887   73230 cri.go:89] found id: ""
	I0906 20:07:21.650910   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.650919   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:21.650924   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:21.650978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:21.684781   73230 cri.go:89] found id: ""
	I0906 20:07:21.684809   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.684819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:21.684827   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:21.684907   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:21.722685   73230 cri.go:89] found id: ""
	I0906 20:07:21.722711   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.722722   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:21.722729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:21.722791   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:21.757581   73230 cri.go:89] found id: ""
	I0906 20:07:21.757607   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.757616   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:21.757622   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:21.757670   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:21.791984   73230 cri.go:89] found id: ""
	I0906 20:07:21.792008   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.792016   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:21.792022   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:21.792072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:21.853612   73230 cri.go:89] found id: ""
	I0906 20:07:21.853636   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.853644   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:21.853650   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:21.853699   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:21.894184   73230 cri.go:89] found id: ""
	I0906 20:07:21.894232   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.894247   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:21.894256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:21.894318   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:21.930731   73230 cri.go:89] found id: ""
	I0906 20:07:21.930758   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.930768   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:21.930779   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:21.930798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:21.969174   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:21.969207   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:22.017647   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:22.017680   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:22.033810   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:22.033852   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:22.111503   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:22.111530   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:22.111544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:24.696348   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:24.710428   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:24.710506   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:24.747923   73230 cri.go:89] found id: ""
	I0906 20:07:24.747958   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.747969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:24.747977   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:24.748037   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:24.782216   73230 cri.go:89] found id: ""
	I0906 20:07:24.782250   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.782260   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:24.782268   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:24.782329   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:24.822093   73230 cri.go:89] found id: ""
	I0906 20:07:24.822126   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.822137   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:24.822148   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:24.822217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:24.857166   73230 cri.go:89] found id: ""
	I0906 20:07:24.857202   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.857213   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:24.857224   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:24.857314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:24.892575   73230 cri.go:89] found id: ""
	I0906 20:07:24.892610   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.892621   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:24.892629   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:24.892689   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:24.929102   73230 cri.go:89] found id: ""
	I0906 20:07:24.929130   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.929140   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:24.929149   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:24.929206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:24.964224   73230 cri.go:89] found id: ""
	I0906 20:07:24.964257   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.964268   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:24.964276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:24.964337   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:25.000453   73230 cri.go:89] found id: ""
	I0906 20:07:25.000475   73230 logs.go:276] 0 containers: []
	W0906 20:07:25.000485   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:25.000496   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:25.000511   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:25.041824   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:25.041851   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:25.093657   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:25.093692   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:25.107547   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:25.107576   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:25.178732   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:25.178755   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:25.178771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:22.338864   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:24.339432   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:26.838165   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.156449   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.156979   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.158086   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.192653   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.693480   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.764271   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:27.777315   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:27.777389   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:27.812621   73230 cri.go:89] found id: ""
	I0906 20:07:27.812644   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.812655   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:27.812663   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:27.812718   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:27.853063   73230 cri.go:89] found id: ""
	I0906 20:07:27.853093   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.853104   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:27.853112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:27.853171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:27.894090   73230 cri.go:89] found id: ""
	I0906 20:07:27.894118   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.894130   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:27.894137   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:27.894196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:27.930764   73230 cri.go:89] found id: ""
	I0906 20:07:27.930791   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.930802   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:27.930809   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:27.930870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:27.967011   73230 cri.go:89] found id: ""
	I0906 20:07:27.967036   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.967047   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:27.967053   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:27.967111   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:28.002119   73230 cri.go:89] found id: ""
	I0906 20:07:28.002146   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.002157   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:28.002164   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:28.002226   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:28.043884   73230 cri.go:89] found id: ""
	I0906 20:07:28.043909   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.043917   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:28.043923   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:28.043979   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:28.081510   73230 cri.go:89] found id: ""
	I0906 20:07:28.081538   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.081547   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:28.081557   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:28.081568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:28.159077   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:28.159109   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:28.207489   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:28.207527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:28.267579   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:28.267613   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:28.287496   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:28.287529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:28.376555   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:28.838301   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.843091   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:29.655598   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:31.657757   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.192112   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:32.692354   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.876683   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:30.890344   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:30.890424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:30.930618   73230 cri.go:89] found id: ""
	I0906 20:07:30.930647   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.930658   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:30.930666   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:30.930727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:30.968801   73230 cri.go:89] found id: ""
	I0906 20:07:30.968825   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.968834   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:30.968839   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:30.968911   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:31.006437   73230 cri.go:89] found id: ""
	I0906 20:07:31.006463   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.006472   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:31.006477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:31.006531   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:31.042091   73230 cri.go:89] found id: ""
	I0906 20:07:31.042117   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.042125   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:31.042131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:31.042177   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:31.079244   73230 cri.go:89] found id: ""
	I0906 20:07:31.079271   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.079280   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:31.079286   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:31.079336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:31.116150   73230 cri.go:89] found id: ""
	I0906 20:07:31.116174   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.116182   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:31.116188   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:31.116240   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:31.151853   73230 cri.go:89] found id: ""
	I0906 20:07:31.151877   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.151886   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:31.151892   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:31.151939   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:31.189151   73230 cri.go:89] found id: ""
	I0906 20:07:31.189181   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.189192   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:31.189203   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:31.189218   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:31.234466   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:31.234493   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:31.286254   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:31.286288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:31.300500   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:31.300525   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:31.372968   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:31.372987   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:31.372997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:33.949865   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:33.964791   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:33.964849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:34.027049   73230 cri.go:89] found id: ""
	I0906 20:07:34.027082   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.027094   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:34.027102   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:34.027162   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:34.080188   73230 cri.go:89] found id: ""
	I0906 20:07:34.080218   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.080230   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:34.080237   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:34.080320   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:34.124146   73230 cri.go:89] found id: ""
	I0906 20:07:34.124171   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.124179   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:34.124185   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:34.124230   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:34.161842   73230 cri.go:89] found id: ""
	I0906 20:07:34.161864   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.161872   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:34.161878   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:34.161938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:34.201923   73230 cri.go:89] found id: ""
	I0906 20:07:34.201951   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.201961   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:34.201967   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:34.202032   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:34.246609   73230 cri.go:89] found id: ""
	I0906 20:07:34.246644   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.246656   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:34.246665   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:34.246739   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:34.287616   73230 cri.go:89] found id: ""
	I0906 20:07:34.287646   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.287657   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:34.287663   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:34.287721   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:34.322270   73230 cri.go:89] found id: ""
	I0906 20:07:34.322297   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.322309   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:34.322320   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:34.322334   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:34.378598   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:34.378633   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:34.392748   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:34.392781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:34.468620   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:34.468648   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:34.468663   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:34.548290   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:34.548324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:33.339665   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.339890   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:34.157895   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:36.656829   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.192386   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.192574   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.095962   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:37.110374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:37.110459   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:37.146705   73230 cri.go:89] found id: ""
	I0906 20:07:37.146732   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.146740   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:37.146746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:37.146802   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:37.185421   73230 cri.go:89] found id: ""
	I0906 20:07:37.185449   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.185461   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:37.185468   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:37.185532   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:37.224767   73230 cri.go:89] found id: ""
	I0906 20:07:37.224793   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.224801   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:37.224806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:37.224884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:37.265392   73230 cri.go:89] found id: ""
	I0906 20:07:37.265422   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.265432   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:37.265438   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:37.265496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:37.302065   73230 cri.go:89] found id: ""
	I0906 20:07:37.302093   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.302101   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:37.302107   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:37.302171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:37.341466   73230 cri.go:89] found id: ""
	I0906 20:07:37.341493   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.341505   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:37.341513   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:37.341576   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.377701   73230 cri.go:89] found id: ""
	I0906 20:07:37.377724   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.377732   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:37.377738   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:37.377798   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:37.412927   73230 cri.go:89] found id: ""
	I0906 20:07:37.412955   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.412966   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:37.412977   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:37.412993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:37.427750   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:37.427776   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:37.500904   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:37.500928   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:37.500945   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:37.583204   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:37.583246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:37.623477   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:37.623512   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.179798   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:40.194295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:40.194372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:40.229731   73230 cri.go:89] found id: ""
	I0906 20:07:40.229768   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.229779   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:40.229787   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:40.229848   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:40.275909   73230 cri.go:89] found id: ""
	I0906 20:07:40.275943   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.275956   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:40.275964   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:40.276049   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:40.316552   73230 cri.go:89] found id: ""
	I0906 20:07:40.316585   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.316594   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:40.316599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:40.316647   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:40.355986   73230 cri.go:89] found id: ""
	I0906 20:07:40.356017   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.356028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:40.356036   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:40.356095   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:40.396486   73230 cri.go:89] found id: ""
	I0906 20:07:40.396522   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.396535   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:40.396544   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:40.396609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:40.440311   73230 cri.go:89] found id: ""
	I0906 20:07:40.440338   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.440346   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:40.440352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:40.440414   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.346532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.839521   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.156737   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.156967   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.691703   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.691972   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:40.476753   73230 cri.go:89] found id: ""
	I0906 20:07:40.476781   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.476790   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:40.476797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:40.476844   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:40.514462   73230 cri.go:89] found id: ""
	I0906 20:07:40.514489   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.514500   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:40.514511   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:40.514527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:40.553670   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:40.553700   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.608304   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:40.608343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:40.622486   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:40.622514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:40.699408   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:40.699434   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:40.699451   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.278892   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:43.292455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:43.292526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:43.328900   73230 cri.go:89] found id: ""
	I0906 20:07:43.328929   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.328940   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:43.328948   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:43.329009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:43.366728   73230 cri.go:89] found id: ""
	I0906 20:07:43.366754   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.366762   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:43.366768   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:43.366817   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:43.401566   73230 cri.go:89] found id: ""
	I0906 20:07:43.401590   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.401599   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:43.401604   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:43.401650   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:43.437022   73230 cri.go:89] found id: ""
	I0906 20:07:43.437051   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.437063   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:43.437072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:43.437140   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:43.473313   73230 cri.go:89] found id: ""
	I0906 20:07:43.473342   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.473354   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:43.473360   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:43.473420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:43.513590   73230 cri.go:89] found id: ""
	I0906 20:07:43.513616   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.513624   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:43.513630   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:43.513690   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:43.549974   73230 cri.go:89] found id: ""
	I0906 20:07:43.550011   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.550025   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:43.550032   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:43.550100   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:43.592386   73230 cri.go:89] found id: ""
	I0906 20:07:43.592426   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.592444   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:43.592454   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:43.592482   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:43.607804   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:43.607841   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:43.679533   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:43.679568   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:43.679580   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.762111   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:43.762145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:43.802883   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:43.802908   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:42.340252   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:44.838648   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:46.838831   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.157956   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.657410   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.693014   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.693640   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.191509   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:46.358429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:46.371252   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:46.371326   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:46.406397   73230 cri.go:89] found id: ""
	I0906 20:07:46.406420   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.406430   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:46.406437   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:46.406496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:46.452186   73230 cri.go:89] found id: ""
	I0906 20:07:46.452209   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.452218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:46.452223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:46.452288   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:46.489418   73230 cri.go:89] found id: ""
	I0906 20:07:46.489443   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.489454   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:46.489461   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:46.489523   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:46.529650   73230 cri.go:89] found id: ""
	I0906 20:07:46.529679   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.529690   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:46.529698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:46.529760   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:46.566429   73230 cri.go:89] found id: ""
	I0906 20:07:46.566454   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.566466   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:46.566474   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:46.566539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:46.604999   73230 cri.go:89] found id: ""
	I0906 20:07:46.605026   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.605034   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:46.605040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:46.605085   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:46.643116   73230 cri.go:89] found id: ""
	I0906 20:07:46.643144   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.643155   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:46.643162   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:46.643222   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:46.679734   73230 cri.go:89] found id: ""
	I0906 20:07:46.679756   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.679764   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:46.679772   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:46.679784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:46.736380   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:46.736430   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:46.750649   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:46.750681   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:46.833098   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:46.833130   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:46.833146   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:46.912223   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:46.912267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.453662   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:49.466520   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:49.466585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:49.508009   73230 cri.go:89] found id: ""
	I0906 20:07:49.508038   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.508049   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:49.508056   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:49.508119   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:49.545875   73230 cri.go:89] found id: ""
	I0906 20:07:49.545900   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.545911   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:49.545918   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:49.545978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:49.584899   73230 cri.go:89] found id: ""
	I0906 20:07:49.584926   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.584933   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:49.584940   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:49.585001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:49.621044   73230 cri.go:89] found id: ""
	I0906 20:07:49.621073   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.621085   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:49.621092   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:49.621146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:49.657074   73230 cri.go:89] found id: ""
	I0906 20:07:49.657099   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.657108   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:49.657115   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:49.657174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:49.693734   73230 cri.go:89] found id: ""
	I0906 20:07:49.693759   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.693767   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:49.693773   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:49.693827   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:49.729920   73230 cri.go:89] found id: ""
	I0906 20:07:49.729950   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.729960   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:49.729965   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:49.730014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:49.765282   73230 cri.go:89] found id: ""
	I0906 20:07:49.765313   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.765324   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:49.765335   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:49.765350   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:49.842509   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:49.842531   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:49.842543   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:49.920670   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:49.920704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.961193   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:49.961220   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:50.014331   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:50.014366   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:48.839877   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:51.339381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.156290   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.157337   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.692055   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:53.191487   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.529758   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:52.543533   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:52.543596   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:52.582802   73230 cri.go:89] found id: ""
	I0906 20:07:52.582826   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.582838   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:52.582845   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:52.582909   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:52.625254   73230 cri.go:89] found id: ""
	I0906 20:07:52.625287   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.625308   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:52.625317   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:52.625383   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:52.660598   73230 cri.go:89] found id: ""
	I0906 20:07:52.660621   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.660632   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:52.660640   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:52.660703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:52.702980   73230 cri.go:89] found id: ""
	I0906 20:07:52.703004   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.703014   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:52.703021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:52.703082   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:52.740361   73230 cri.go:89] found id: ""
	I0906 20:07:52.740387   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.740394   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:52.740400   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:52.740447   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:52.780011   73230 cri.go:89] found id: ""
	I0906 20:07:52.780043   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.780056   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:52.780063   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:52.780123   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:52.825546   73230 cri.go:89] found id: ""
	I0906 20:07:52.825583   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.825595   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:52.825602   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:52.825659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:52.864347   73230 cri.go:89] found id: ""
	I0906 20:07:52.864381   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.864393   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:52.864403   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:52.864417   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:52.943041   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:52.943077   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:52.986158   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:52.986185   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:53.039596   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:53.039635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:53.054265   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:53.054295   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:53.125160   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:53.339887   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.839233   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.657521   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.157101   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.192803   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.692328   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.626058   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:55.639631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:55.639705   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:55.677283   73230 cri.go:89] found id: ""
	I0906 20:07:55.677304   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.677312   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:55.677317   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:55.677372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:55.714371   73230 cri.go:89] found id: ""
	I0906 20:07:55.714402   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.714414   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:55.714422   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:55.714509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:55.753449   73230 cri.go:89] found id: ""
	I0906 20:07:55.753487   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.753500   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:55.753507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:55.753575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:55.792955   73230 cri.go:89] found id: ""
	I0906 20:07:55.792987   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.792999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:55.793006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:55.793074   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:55.827960   73230 cri.go:89] found id: ""
	I0906 20:07:55.827985   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.827996   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:55.828003   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:55.828052   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:55.867742   73230 cri.go:89] found id: ""
	I0906 20:07:55.867765   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.867778   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:55.867785   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:55.867849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:55.907328   73230 cri.go:89] found id: ""
	I0906 20:07:55.907352   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.907359   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:55.907365   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:55.907424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:55.946057   73230 cri.go:89] found id: ""
	I0906 20:07:55.946091   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.946099   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:55.946108   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:55.946119   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:56.033579   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:56.033598   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:56.033611   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:56.116337   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:56.116372   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:56.163397   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:56.163428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:56.217189   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:56.217225   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:58.736147   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:58.749729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:58.749833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:58.786375   73230 cri.go:89] found id: ""
	I0906 20:07:58.786399   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.786406   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:58.786412   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:58.786460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:58.825188   73230 cri.go:89] found id: ""
	I0906 20:07:58.825210   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.825218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:58.825223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:58.825271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:58.866734   73230 cri.go:89] found id: ""
	I0906 20:07:58.866756   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.866764   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:58.866769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:58.866823   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:58.909742   73230 cri.go:89] found id: ""
	I0906 20:07:58.909774   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.909785   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:58.909793   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:58.909850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:58.950410   73230 cri.go:89] found id: ""
	I0906 20:07:58.950438   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.950447   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:58.950452   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:58.950500   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:58.987431   73230 cri.go:89] found id: ""
	I0906 20:07:58.987454   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.987462   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:58.987468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:58.987518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:59.023432   73230 cri.go:89] found id: ""
	I0906 20:07:59.023462   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.023474   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:59.023482   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:59.023544   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:59.057695   73230 cri.go:89] found id: ""
	I0906 20:07:59.057724   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.057734   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:59.057743   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:59.057755   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:59.109634   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:59.109671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:59.125436   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:59.125479   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:59.202018   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:59.202040   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:59.202054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:59.281418   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:59.281456   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:58.339751   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.842794   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.658145   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.155679   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.157913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.192179   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.193068   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:01.823947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:01.839055   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:01.839115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:01.876178   73230 cri.go:89] found id: ""
	I0906 20:08:01.876206   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.876215   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:01.876220   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:01.876274   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:01.912000   73230 cri.go:89] found id: ""
	I0906 20:08:01.912028   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.912038   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:01.912045   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:01.912107   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:01.948382   73230 cri.go:89] found id: ""
	I0906 20:08:01.948412   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.948420   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:01.948426   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:01.948474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:01.982991   73230 cri.go:89] found id: ""
	I0906 20:08:01.983019   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.983028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:01.983033   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:01.983080   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:02.016050   73230 cri.go:89] found id: ""
	I0906 20:08:02.016076   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.016085   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:02.016091   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:02.016151   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:02.051087   73230 cri.go:89] found id: ""
	I0906 20:08:02.051125   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.051137   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:02.051150   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:02.051214   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:02.093230   73230 cri.go:89] found id: ""
	I0906 20:08:02.093254   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.093263   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:02.093268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:02.093323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:02.130580   73230 cri.go:89] found id: ""
	I0906 20:08:02.130609   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.130619   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:02.130629   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:02.130644   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:02.183192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:02.183231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:02.199079   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:02.199110   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:02.274259   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:02.274279   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:02.274303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:02.356198   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:02.356234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:04.899180   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:04.912879   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:04.912955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:04.950598   73230 cri.go:89] found id: ""
	I0906 20:08:04.950632   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.950642   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:04.950656   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:04.950713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:04.986474   73230 cri.go:89] found id: ""
	I0906 20:08:04.986504   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.986513   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:04.986519   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:04.986570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:05.025837   73230 cri.go:89] found id: ""
	I0906 20:08:05.025868   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.025877   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:05.025884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:05.025934   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:05.063574   73230 cri.go:89] found id: ""
	I0906 20:08:05.063613   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.063622   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:05.063628   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:05.063674   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:05.101341   73230 cri.go:89] found id: ""
	I0906 20:08:05.101371   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.101383   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:05.101390   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:05.101461   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:05.148551   73230 cri.go:89] found id: ""
	I0906 20:08:05.148580   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.148591   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:05.148599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:05.148668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:05.186907   73230 cri.go:89] found id: ""
	I0906 20:08:05.186935   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.186945   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:05.186953   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:05.187019   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:05.226237   73230 cri.go:89] found id: ""
	I0906 20:08:05.226265   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.226275   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:05.226287   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:05.226300   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:05.242892   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:05.242925   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:05.317797   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:05.317824   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:05.317839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:05.400464   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:05.400500   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:05.442632   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:05.442657   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:03.340541   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:05.840156   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.655913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:06.657424   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.691255   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.191739   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.998033   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:08.012363   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:08.012441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:08.048816   73230 cri.go:89] found id: ""
	I0906 20:08:08.048847   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.048876   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:08.048884   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:08.048947   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:08.109623   73230 cri.go:89] found id: ""
	I0906 20:08:08.109650   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.109661   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:08.109668   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:08.109730   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:08.145405   73230 cri.go:89] found id: ""
	I0906 20:08:08.145432   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.145443   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:08.145451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:08.145514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:08.187308   73230 cri.go:89] found id: ""
	I0906 20:08:08.187344   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.187355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:08.187362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:08.187422   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:08.228782   73230 cri.go:89] found id: ""
	I0906 20:08:08.228815   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.228826   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:08.228833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:08.228918   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:08.269237   73230 cri.go:89] found id: ""
	I0906 20:08:08.269266   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.269276   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:08.269285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:08.269351   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:08.305115   73230 cri.go:89] found id: ""
	I0906 20:08:08.305141   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.305149   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:08.305155   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:08.305206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:08.345442   73230 cri.go:89] found id: ""
	I0906 20:08:08.345472   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.345483   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:08.345494   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:08.345510   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:08.396477   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:08.396518   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:08.410978   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:08.411002   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:08.486220   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:08.486247   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:08.486265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:08.574138   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:08.574190   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:08.339280   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:10.340142   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.156809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.160037   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.192303   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.192456   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.192684   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.117545   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:11.131884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:11.131944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:11.169481   73230 cri.go:89] found id: ""
	I0906 20:08:11.169507   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.169518   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:11.169525   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:11.169590   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:11.211068   73230 cri.go:89] found id: ""
	I0906 20:08:11.211092   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.211100   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:11.211105   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:11.211157   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:11.250526   73230 cri.go:89] found id: ""
	I0906 20:08:11.250560   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.250574   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:11.250580   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:11.250627   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:11.289262   73230 cri.go:89] found id: ""
	I0906 20:08:11.289284   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.289292   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:11.289299   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:11.289346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:11.335427   73230 cri.go:89] found id: ""
	I0906 20:08:11.335456   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.335467   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:11.335475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:11.335535   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:11.375481   73230 cri.go:89] found id: ""
	I0906 20:08:11.375509   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.375518   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:11.375524   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:11.375575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:11.416722   73230 cri.go:89] found id: ""
	I0906 20:08:11.416748   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.416758   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:11.416765   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:11.416830   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:11.452986   73230 cri.go:89] found id: ""
	I0906 20:08:11.453019   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.453030   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:11.453042   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:11.453059   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:11.466435   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:11.466461   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:11.545185   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:11.545212   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:11.545231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:11.627390   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:11.627422   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:11.674071   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:11.674098   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.225887   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:14.242121   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:14.242200   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:14.283024   73230 cri.go:89] found id: ""
	I0906 20:08:14.283055   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.283067   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:14.283074   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:14.283135   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:14.325357   73230 cri.go:89] found id: ""
	I0906 20:08:14.325379   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.325387   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:14.325392   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:14.325455   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:14.362435   73230 cri.go:89] found id: ""
	I0906 20:08:14.362459   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.362467   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:14.362473   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:14.362537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:14.398409   73230 cri.go:89] found id: ""
	I0906 20:08:14.398441   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.398450   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:14.398455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:14.398509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:14.434902   73230 cri.go:89] found id: ""
	I0906 20:08:14.434934   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.434943   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:14.434950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:14.435009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:14.476605   73230 cri.go:89] found id: ""
	I0906 20:08:14.476635   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.476647   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:14.476655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:14.476717   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:14.533656   73230 cri.go:89] found id: ""
	I0906 20:08:14.533681   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.533690   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:14.533696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:14.533753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:14.599661   73230 cri.go:89] found id: ""
	I0906 20:08:14.599685   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.599693   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:14.599702   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:14.599715   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.657680   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:14.657712   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:14.671594   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:14.671624   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:14.747945   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:14.747969   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:14.747979   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:14.829021   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:14.829057   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:12.838805   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:14.839569   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.659405   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:16.156840   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:15.692205   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.693709   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
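The interleaved pod_ready lines come from other test processes polling metrics-server pods that never report Ready. An equivalent manual readiness check, as a sketch (the pod name is copied from the log; the jsonpath expression assumes the standard Ready condition is present on the pod):

	kubectl --namespace kube-system get pod metrics-server-6867b74b74-nn295 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'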
	I0906 20:08:17.373569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:17.388910   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:17.388987   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:17.428299   73230 cri.go:89] found id: ""
	I0906 20:08:17.428335   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.428347   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:17.428354   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:17.428419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:17.464660   73230 cri.go:89] found id: ""
	I0906 20:08:17.464685   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.464692   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:17.464697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:17.464758   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:17.500018   73230 cri.go:89] found id: ""
	I0906 20:08:17.500047   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.500059   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:17.500067   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:17.500130   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:17.536345   73230 cri.go:89] found id: ""
	I0906 20:08:17.536375   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.536386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:17.536394   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:17.536456   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:17.574668   73230 cri.go:89] found id: ""
	I0906 20:08:17.574696   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.574707   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:17.574715   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:17.574780   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:17.611630   73230 cri.go:89] found id: ""
	I0906 20:08:17.611653   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.611663   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:17.611669   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:17.611713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:17.647610   73230 cri.go:89] found id: ""
	I0906 20:08:17.647639   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.647649   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:17.647657   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:17.647724   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:17.686204   73230 cri.go:89] found id: ""
	I0906 20:08:17.686233   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.686246   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:17.686260   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:17.686273   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:17.702040   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:17.702069   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:17.775033   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:17.775058   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:17.775074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:17.862319   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:17.862359   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:17.905567   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:17.905604   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:17.339116   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:19.839554   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:21.839622   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:18.157104   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.657604   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.191024   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:22.192687   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.457191   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:20.471413   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:20.471474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:20.533714   73230 cri.go:89] found id: ""
	I0906 20:08:20.533749   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.533765   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:20.533772   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:20.533833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:20.580779   73230 cri.go:89] found id: ""
	I0906 20:08:20.580811   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.580823   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:20.580830   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:20.580902   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:20.619729   73230 cri.go:89] found id: ""
	I0906 20:08:20.619755   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.619763   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:20.619769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:20.619816   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:20.661573   73230 cri.go:89] found id: ""
	I0906 20:08:20.661599   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.661606   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:20.661612   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:20.661664   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:20.709409   73230 cri.go:89] found id: ""
	I0906 20:08:20.709443   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.709455   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:20.709463   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:20.709515   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:20.746743   73230 cri.go:89] found id: ""
	I0906 20:08:20.746783   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.746808   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:20.746816   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:20.746891   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:20.788129   73230 cri.go:89] found id: ""
	I0906 20:08:20.788155   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.788164   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:20.788170   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:20.788217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:20.825115   73230 cri.go:89] found id: ""
	I0906 20:08:20.825139   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.825147   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:20.825156   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:20.825167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:20.880975   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:20.881013   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:20.895027   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:20.895061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:20.972718   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:20.972739   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:20.972754   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:21.053062   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:21.053096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:23.595439   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:23.612354   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:23.612419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:23.654479   73230 cri.go:89] found id: ""
	I0906 20:08:23.654508   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.654519   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:23.654526   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:23.654591   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:23.690061   73230 cri.go:89] found id: ""
	I0906 20:08:23.690092   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.690103   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:23.690112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:23.690173   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:23.726644   73230 cri.go:89] found id: ""
	I0906 20:08:23.726670   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.726678   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:23.726684   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:23.726744   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:23.763348   73230 cri.go:89] found id: ""
	I0906 20:08:23.763378   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.763386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:23.763391   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:23.763452   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:23.799260   73230 cri.go:89] found id: ""
	I0906 20:08:23.799290   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.799299   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:23.799305   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:23.799359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:23.843438   73230 cri.go:89] found id: ""
	I0906 20:08:23.843470   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.843481   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:23.843489   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:23.843558   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:23.879818   73230 cri.go:89] found id: ""
	I0906 20:08:23.879847   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.879856   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:23.879867   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:23.879933   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:23.916182   73230 cri.go:89] found id: ""
	I0906 20:08:23.916207   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.916220   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:23.916229   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:23.916240   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:23.987003   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:23.987022   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:23.987033   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:24.073644   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:24.073684   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:24.118293   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:24.118328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:24.172541   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:24.172582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
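The "container status" step in each cycle uses a fallback chain: it resolves crictl with which (falling back to the bare name if which fails) and, if the whole crictl invocation fails, runs docker ps -a instead. The same fallback as a standalone sketch, with $() in place of the backticks used in the log:

	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a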
	I0906 20:08:23.840441   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.338539   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:23.155661   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:25.155855   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:27.157624   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:24.692350   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.692534   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.687747   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:26.702174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:26.702238   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:26.740064   73230 cri.go:89] found id: ""
	I0906 20:08:26.740093   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.740101   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:26.740108   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:26.740158   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:26.775198   73230 cri.go:89] found id: ""
	I0906 20:08:26.775227   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.775237   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:26.775244   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:26.775303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:26.808850   73230 cri.go:89] found id: ""
	I0906 20:08:26.808892   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.808903   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:26.808915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:26.808974   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:26.842926   73230 cri.go:89] found id: ""
	I0906 20:08:26.842953   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.842964   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:26.842972   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:26.843031   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:26.878621   73230 cri.go:89] found id: ""
	I0906 20:08:26.878649   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.878658   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:26.878664   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:26.878713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:26.921816   73230 cri.go:89] found id: ""
	I0906 20:08:26.921862   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.921875   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:26.921884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:26.921952   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:26.960664   73230 cri.go:89] found id: ""
	I0906 20:08:26.960692   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.960702   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:26.960709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:26.960771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:27.004849   73230 cri.go:89] found id: ""
	I0906 20:08:27.004904   73230 logs.go:276] 0 containers: []
	W0906 20:08:27.004913   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:27.004922   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:27.004934   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:27.056237   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:27.056267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:27.071882   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:27.071904   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:27.143927   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:27.143949   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:27.143961   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:27.223901   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:27.223935   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:29.766615   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:29.780295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:29.780367   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:29.817745   73230 cri.go:89] found id: ""
	I0906 20:08:29.817775   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.817784   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:29.817790   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:29.817852   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:29.855536   73230 cri.go:89] found id: ""
	I0906 20:08:29.855559   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.855567   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:29.855572   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:29.855628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:29.895043   73230 cri.go:89] found id: ""
	I0906 20:08:29.895092   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.895104   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:29.895111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:29.895178   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:29.939225   73230 cri.go:89] found id: ""
	I0906 20:08:29.939248   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.939256   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:29.939262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:29.939331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:29.974166   73230 cri.go:89] found id: ""
	I0906 20:08:29.974190   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.974198   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:29.974203   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:29.974258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:30.009196   73230 cri.go:89] found id: ""
	I0906 20:08:30.009226   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.009237   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:30.009245   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:30.009310   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:30.043939   73230 cri.go:89] found id: ""
	I0906 20:08:30.043962   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.043970   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:30.043976   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:30.044023   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:30.080299   73230 cri.go:89] found id: ""
	I0906 20:08:30.080328   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.080336   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:30.080345   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:30.080356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:30.131034   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:30.131068   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:30.145502   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:30.145536   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:30.219941   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:30.219963   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:30.219978   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:30.307958   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:30.307995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:28.839049   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.338815   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.656748   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.657112   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.192284   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.193181   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.854002   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:32.867937   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:32.867998   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:32.906925   73230 cri.go:89] found id: ""
	I0906 20:08:32.906957   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.906969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:32.906976   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:32.907038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:32.946662   73230 cri.go:89] found id: ""
	I0906 20:08:32.946691   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.946702   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:32.946710   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:32.946771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:32.981908   73230 cri.go:89] found id: ""
	I0906 20:08:32.981936   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.981944   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:32.981950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:32.982001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:33.014902   73230 cri.go:89] found id: ""
	I0906 20:08:33.014930   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.014939   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:33.014945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:33.015055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:33.051265   73230 cri.go:89] found id: ""
	I0906 20:08:33.051290   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.051298   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:33.051310   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:33.051363   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:33.085436   73230 cri.go:89] found id: ""
	I0906 20:08:33.085468   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.085480   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:33.085487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:33.085552   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:33.121483   73230 cri.go:89] found id: ""
	I0906 20:08:33.121509   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.121517   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:33.121523   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:33.121578   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:33.159883   73230 cri.go:89] found id: ""
	I0906 20:08:33.159915   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.159926   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:33.159937   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:33.159953   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:33.174411   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:33.174442   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:33.243656   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:33.243694   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:33.243710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:33.321782   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:33.321823   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:33.363299   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:33.363335   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:33.339645   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.839545   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.650358   72441 pod_ready.go:82] duration metric: took 4m0.000296679s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:32.650386   72441 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:32.650410   72441 pod_ready.go:39] duration metric: took 4m12.042795571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:32.650440   72441 kubeadm.go:597] duration metric: took 4m19.97234293s to restartPrimaryControlPlane
	W0906 20:08:32.650505   72441 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:32.650542   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
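At this point the 4m0s "Ready" wait for the metrics-server pod has timed out, so minikube stops trying to restart the existing control plane and resets it before re-initialising the cluster. A minimal sketch of that fallback step, using the exact command recorded in the log (`--force` skips the confirmation prompt, `--cri-socket` points kubeadm at the CRI-O socket):

    # Reset the control plane with the bundled kubeadm before a fresh init.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
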
	I0906 20:08:33.692877   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:36.192090   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:38.192465   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.916159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:35.929190   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:35.929265   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:35.962853   73230 cri.go:89] found id: ""
	I0906 20:08:35.962890   73230 logs.go:276] 0 containers: []
	W0906 20:08:35.962901   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:35.962909   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:35.962969   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:36.000265   73230 cri.go:89] found id: ""
	I0906 20:08:36.000309   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.000318   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:36.000324   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:36.000374   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:36.042751   73230 cri.go:89] found id: ""
	I0906 20:08:36.042781   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.042792   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:36.042800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:36.042859   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:36.077922   73230 cri.go:89] found id: ""
	I0906 20:08:36.077957   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.077967   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:36.077975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:36.078038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:36.114890   73230 cri.go:89] found id: ""
	I0906 20:08:36.114926   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.114937   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:36.114945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:36.114997   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:36.148058   73230 cri.go:89] found id: ""
	I0906 20:08:36.148089   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.148101   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:36.148108   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:36.148167   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:36.187334   73230 cri.go:89] found id: ""
	I0906 20:08:36.187361   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.187371   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:36.187379   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:36.187498   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:36.221295   73230 cri.go:89] found id: ""
	I0906 20:08:36.221331   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.221342   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:36.221353   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:36.221367   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:36.273489   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:36.273527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:36.287975   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:36.288005   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:36.366914   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:36.366937   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:36.366950   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:36.446582   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:36.446619   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.987075   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:39.001051   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:39.001113   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:39.038064   73230 cri.go:89] found id: ""
	I0906 20:08:39.038093   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.038103   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:39.038110   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:39.038175   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:39.075759   73230 cri.go:89] found id: ""
	I0906 20:08:39.075788   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.075799   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:39.075805   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:39.075866   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:39.113292   73230 cri.go:89] found id: ""
	I0906 20:08:39.113320   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.113331   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:39.113339   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:39.113404   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:39.157236   73230 cri.go:89] found id: ""
	I0906 20:08:39.157269   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.157281   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:39.157289   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:39.157362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:39.195683   73230 cri.go:89] found id: ""
	I0906 20:08:39.195704   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.195712   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:39.195717   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:39.195763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:39.234865   73230 cri.go:89] found id: ""
	I0906 20:08:39.234894   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.234903   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:39.234909   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:39.234961   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:39.269946   73230 cri.go:89] found id: ""
	I0906 20:08:39.269975   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.269983   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:39.269989   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:39.270034   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:39.306184   73230 cri.go:89] found id: ""
	I0906 20:08:39.306214   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.306225   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:39.306235   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:39.306249   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:39.357887   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:39.357920   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:39.371736   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:39.371767   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:39.445674   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:39.445695   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:39.445708   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:39.525283   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:39.525316   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.343370   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.839247   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.691846   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.694807   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.069066   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:42.083229   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:42.083313   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:42.124243   73230 cri.go:89] found id: ""
	I0906 20:08:42.124267   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.124275   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:42.124280   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:42.124330   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:42.162070   73230 cri.go:89] found id: ""
	I0906 20:08:42.162102   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.162113   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:42.162120   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:42.162183   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:42.199161   73230 cri.go:89] found id: ""
	I0906 20:08:42.199191   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.199201   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:42.199208   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:42.199266   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:42.236956   73230 cri.go:89] found id: ""
	I0906 20:08:42.236980   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.236991   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:42.236996   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:42.237068   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:42.272299   73230 cri.go:89] found id: ""
	I0906 20:08:42.272328   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.272336   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:42.272341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:42.272400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:42.310280   73230 cri.go:89] found id: ""
	I0906 20:08:42.310304   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.310312   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:42.310317   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:42.310362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:42.345850   73230 cri.go:89] found id: ""
	I0906 20:08:42.345873   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.345881   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:42.345887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:42.345937   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:42.380785   73230 cri.go:89] found id: ""
	I0906 20:08:42.380812   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.380820   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:42.380830   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:42.380843   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.435803   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:42.435839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:42.450469   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:42.450498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:42.521565   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:42.521587   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:42.521599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:42.595473   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:42.595508   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:45.136985   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:45.150468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:45.150540   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:45.186411   73230 cri.go:89] found id: ""
	I0906 20:08:45.186440   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.186448   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:45.186454   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:45.186521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:45.224463   73230 cri.go:89] found id: ""
	I0906 20:08:45.224495   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.224506   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:45.224513   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:45.224568   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:45.262259   73230 cri.go:89] found id: ""
	I0906 20:08:45.262286   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.262295   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:45.262301   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:45.262357   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:45.299463   73230 cri.go:89] found id: ""
	I0906 20:08:45.299492   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.299501   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:45.299507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:45.299561   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:45.336125   73230 cri.go:89] found id: ""
	I0906 20:08:45.336153   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.336162   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:45.336168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:45.336216   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:45.370397   73230 cri.go:89] found id: ""
	I0906 20:08:45.370427   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.370439   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:45.370448   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:45.370518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:45.406290   73230 cri.go:89] found id: ""
	I0906 20:08:45.406322   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.406333   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:45.406341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:45.406402   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:45.441560   73230 cri.go:89] found id: ""
	I0906 20:08:45.441592   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.441603   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:45.441614   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:45.441627   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.840127   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.349331   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.192059   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:47.691416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.508769   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:45.508811   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:45.523659   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:45.523696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:45.595544   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:45.595567   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:45.595582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:45.676060   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:45.676096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:48.216490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:48.230021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:48.230093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:48.267400   73230 cri.go:89] found id: ""
	I0906 20:08:48.267433   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.267444   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:48.267451   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:48.267519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:48.314694   73230 cri.go:89] found id: ""
	I0906 20:08:48.314722   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.314731   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:48.314739   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:48.314805   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:48.358861   73230 cri.go:89] found id: ""
	I0906 20:08:48.358895   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.358906   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:48.358915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:48.358990   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:48.398374   73230 cri.go:89] found id: ""
	I0906 20:08:48.398400   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.398410   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:48.398416   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:48.398488   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:48.438009   73230 cri.go:89] found id: ""
	I0906 20:08:48.438039   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.438050   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:48.438058   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:48.438115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:48.475970   73230 cri.go:89] found id: ""
	I0906 20:08:48.475998   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.476007   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:48.476013   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:48.476071   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:48.512191   73230 cri.go:89] found id: ""
	I0906 20:08:48.512220   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.512230   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:48.512237   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:48.512299   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:48.547820   73230 cri.go:89] found id: ""
	I0906 20:08:48.547850   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.547861   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:48.547872   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:48.547886   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:48.616962   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:48.616997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:48.631969   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:48.631998   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:48.717025   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:48.717043   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:48.717054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:48.796131   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:48.796167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:47.838558   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.839063   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.839099   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.693239   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:52.191416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.342030   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:51.355761   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:51.355845   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:51.395241   73230 cri.go:89] found id: ""
	I0906 20:08:51.395272   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.395283   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:51.395290   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:51.395350   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:51.433860   73230 cri.go:89] found id: ""
	I0906 20:08:51.433888   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.433897   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:51.433904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:51.433968   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:51.475568   73230 cri.go:89] found id: ""
	I0906 20:08:51.475598   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.475608   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:51.475615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:51.475678   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:51.512305   73230 cri.go:89] found id: ""
	I0906 20:08:51.512329   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.512337   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:51.512342   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:51.512391   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:51.545796   73230 cri.go:89] found id: ""
	I0906 20:08:51.545819   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.545827   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:51.545833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:51.545884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:51.578506   73230 cri.go:89] found id: ""
	I0906 20:08:51.578531   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.578539   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:51.578545   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:51.578609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:51.616571   73230 cri.go:89] found id: ""
	I0906 20:08:51.616596   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.616609   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:51.616615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:51.616660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:51.651542   73230 cri.go:89] found id: ""
	I0906 20:08:51.651566   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.651580   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:51.651588   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:51.651599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:51.705160   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:51.705193   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:51.719450   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:51.719477   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:51.789775   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:51.789796   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:51.789809   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:51.870123   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:51.870158   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.411818   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:54.425759   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:54.425818   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:54.467920   73230 cri.go:89] found id: ""
	I0906 20:08:54.467943   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.467951   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:54.467956   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:54.468008   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:54.508324   73230 cri.go:89] found id: ""
	I0906 20:08:54.508349   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.508357   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:54.508363   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:54.508410   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:54.544753   73230 cri.go:89] found id: ""
	I0906 20:08:54.544780   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.544790   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:54.544797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:54.544884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:54.581407   73230 cri.go:89] found id: ""
	I0906 20:08:54.581436   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.581446   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:54.581453   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:54.581514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:54.618955   73230 cri.go:89] found id: ""
	I0906 20:08:54.618986   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.618998   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:54.619006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:54.619065   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:54.656197   73230 cri.go:89] found id: ""
	I0906 20:08:54.656229   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.656248   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:54.656255   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:54.656316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:54.697499   73230 cri.go:89] found id: ""
	I0906 20:08:54.697536   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.697544   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:54.697549   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:54.697600   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:54.734284   73230 cri.go:89] found id: ""
	I0906 20:08:54.734313   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.734331   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:54.734342   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:54.734356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:54.811079   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:54.811100   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:54.811111   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:54.887309   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:54.887346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.930465   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:54.930499   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:55.000240   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:55.000303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:54.339076   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:54.833352   72867 pod_ready.go:82] duration metric: took 4m0.000854511s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:54.833398   72867 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:54.833423   72867 pod_ready.go:39] duration metric: took 4m14.79685184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:54.833458   72867 kubeadm.go:597] duration metric: took 4m22.254900492s to restartPrimaryControlPlane
	W0906 20:08:54.833525   72867 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:54.833576   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:08:54.192038   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:56.192120   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:58.193505   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:57.530956   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:57.544056   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:57.544136   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:57.584492   73230 cri.go:89] found id: ""
	I0906 20:08:57.584519   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.584528   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:57.584534   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:57.584585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:57.620220   73230 cri.go:89] found id: ""
	I0906 20:08:57.620250   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.620259   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:57.620265   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:57.620321   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:57.655245   73230 cri.go:89] found id: ""
	I0906 20:08:57.655268   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.655283   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:57.655288   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:57.655346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:57.690439   73230 cri.go:89] found id: ""
	I0906 20:08:57.690470   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.690481   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:57.690487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:57.690551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:57.728179   73230 cri.go:89] found id: ""
	I0906 20:08:57.728206   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.728214   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:57.728221   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:57.728270   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:57.763723   73230 cri.go:89] found id: ""
	I0906 20:08:57.763752   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.763761   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:57.763767   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:57.763825   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:57.799836   73230 cri.go:89] found id: ""
	I0906 20:08:57.799861   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.799869   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:57.799876   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:57.799922   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:57.834618   73230 cri.go:89] found id: ""
	I0906 20:08:57.834644   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.834651   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:57.834660   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:57.834671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:57.887297   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:57.887331   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:57.901690   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:57.901717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:57.969179   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:57.969209   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:57.969223   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:58.052527   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:58.052642   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:58.870446   72441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.219876198s)
	I0906 20:08:58.870530   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:08:58.888197   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:08:58.899185   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:08:58.909740   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:08:58.909762   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:08:58.909806   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:08:58.919589   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:08:58.919646   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:08:58.930386   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:08:58.940542   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:08:58.940621   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:08:58.951673   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.963471   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:08:58.963545   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.974638   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:08:58.984780   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:08:58.984843   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:08:58.995803   72441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:08:59.046470   72441 kubeadm.go:310] W0906 20:08:59.003226    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.047297   72441 kubeadm.go:310] W0906 20:08:59.004193    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.166500   72441 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:00.691499   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:02.692107   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:00.593665   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:00.608325   73230 kubeadm.go:597] duration metric: took 4m4.153407014s to restartPrimaryControlPlane
	W0906 20:09:00.608399   73230 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:00.608428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:05.878028   73230 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.269561172s)
	I0906 20:09:05.878112   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:05.893351   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:05.904668   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:05.915560   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:05.915583   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:05.915633   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:09:05.926566   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:05.926625   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:05.937104   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:09:05.946406   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:05.946467   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:05.956203   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.965691   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:05.965751   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.976210   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:09:05.986104   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:05.986174   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:05.996282   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:06.068412   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:09:06.068507   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:06.213882   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:06.214044   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:06.214191   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:09:06.406793   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.067295   72441 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:07.067370   72441 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:07.067449   72441 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:07.067595   72441 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:07.067737   72441 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:07.067795   72441 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.069381   72441 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:07.069477   72441 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:07.069559   72441 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:07.069652   72441 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:07.069733   72441 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:07.069825   72441 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:07.069898   72441 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:07.069981   72441 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:07.070068   72441 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:07.070178   72441 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:07.070279   72441 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:07.070349   72441 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:07.070424   72441 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:07.070494   72441 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:07.070592   72441 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:07.070669   72441 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.070755   72441 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.070828   72441 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.070916   72441 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.070972   72441 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:07.072214   72441 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.072317   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.072399   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.072487   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.072613   72441 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.072685   72441 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.072719   72441 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.072837   72441 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:07.072977   72441 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:07.073063   72441 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.515053ms
	I0906 20:09:07.073178   72441 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:07.073257   72441 kubeadm.go:310] [api-check] The API server is healthy after 5.001748851s
	I0906 20:09:07.073410   72441 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:07.073558   72441 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:07.073650   72441 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:07.073860   72441 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-458066 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:07.073936   72441 kubeadm.go:310] [bootstrap-token] Using token: 3t2lf6.w44vkc4kfppuo2gp
	I0906 20:09:07.075394   72441 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:07.075524   72441 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:07.075621   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:07.075738   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:07.075905   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:07.076003   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:07.076094   72441 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:07.076222   72441 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:07.076397   72441 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:07.076486   72441 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:07.076502   72441 kubeadm.go:310] 
	I0906 20:09:07.076579   72441 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:07.076594   72441 kubeadm.go:310] 
	I0906 20:09:07.076687   72441 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:07.076698   72441 kubeadm.go:310] 
	I0906 20:09:07.076727   72441 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:07.076810   72441 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:07.076893   72441 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:07.076900   72441 kubeadm.go:310] 
	I0906 20:09:07.077016   72441 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:07.077029   72441 kubeadm.go:310] 
	I0906 20:09:07.077090   72441 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:07.077105   72441 kubeadm.go:310] 
	I0906 20:09:07.077172   72441 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:07.077273   72441 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:07.077368   72441 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:07.077377   72441 kubeadm.go:310] 
	I0906 20:09:07.077496   72441 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:07.077589   72441 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:07.077600   72441 kubeadm.go:310] 
	I0906 20:09:07.077680   72441 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.077767   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:07.077807   72441 kubeadm.go:310] 	--control-plane 
	I0906 20:09:07.077817   72441 kubeadm.go:310] 
	I0906 20:09:07.077927   72441 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:07.077946   72441 kubeadm.go:310] 
	I0906 20:09:07.078053   72441 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.078191   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:09:07.078206   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:09:07.078216   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:07.079782   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:07.080965   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:07.092500   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:09:07.112546   72441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:07.112618   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:07.112648   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-458066 minikube.k8s.io/updated_at=2024_09_06T20_09_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=embed-certs-458066 minikube.k8s.io/primary=true
	I0906 20:09:07.343125   72441 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:07.343284   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:06.408933   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:06.409043   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:06.409126   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:06.409242   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:06.409351   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:06.409445   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:06.409559   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:06.409666   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:06.409758   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:06.409870   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:06.409964   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:06.410010   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:06.410101   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:06.721268   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:06.888472   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.414908   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.505887   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.525704   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.525835   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.525913   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.699971   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:04.692422   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.193312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.701970   73230 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.702095   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.708470   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.710216   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.711016   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.714706   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:09:07.844097   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.344174   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.843884   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.343591   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.843748   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.344148   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.844002   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.343424   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.444023   72441 kubeadm.go:1113] duration metric: took 4.331471016s to wait for elevateKubeSystemPrivileges
	I0906 20:09:11.444067   72441 kubeadm.go:394] duration metric: took 4m58.815096997s to StartCluster
	I0906 20:09:11.444093   72441 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.444186   72441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:11.446093   72441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.446360   72441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:11.446430   72441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:11.446521   72441 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-458066"
	I0906 20:09:11.446542   72441 addons.go:69] Setting default-storageclass=true in profile "embed-certs-458066"
	I0906 20:09:11.446560   72441 addons.go:69] Setting metrics-server=true in profile "embed-certs-458066"
	I0906 20:09:11.446609   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:11.446615   72441 addons.go:234] Setting addon metrics-server=true in "embed-certs-458066"
	W0906 20:09:11.446663   72441 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:11.446694   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.446576   72441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-458066"
	I0906 20:09:11.446570   72441 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-458066"
	W0906 20:09:11.446779   72441 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:11.446810   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.447077   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447112   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447170   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447211   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447350   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447426   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447879   72441 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:11.449461   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:11.463673   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44603
	I0906 20:09:11.463676   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0906 20:09:11.464129   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464231   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464669   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464691   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.464675   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464745   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.465097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465139   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465608   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465634   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.465731   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465778   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.466622   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0906 20:09:11.466967   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.467351   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.467366   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.467622   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.467759   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.471093   72441 addons.go:234] Setting addon default-storageclass=true in "embed-certs-458066"
	W0906 20:09:11.471115   72441 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:11.471145   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.471524   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.471543   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.488980   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0906 20:09:11.489014   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0906 20:09:11.489399   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0906 20:09:11.489465   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489517   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489908   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.490116   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490134   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490144   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490158   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490411   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490427   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490481   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490872   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490886   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.491406   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.491500   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.491520   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.491619   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.493485   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.493901   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.495272   72441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:11.495274   72441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:11.496553   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:11.496575   72441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:11.496597   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.496647   72441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.496667   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:11.496684   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.500389   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500395   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500469   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.500786   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500808   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500952   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501105   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.501145   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501259   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501305   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.501389   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501501   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.510188   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0906 20:09:11.510617   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.511142   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.511169   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.511539   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.511754   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.513207   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.513439   72441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.513455   72441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:11.513474   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.516791   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517292   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.517323   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517563   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.517898   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.518085   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.518261   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.669057   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:11.705086   72441 node_ready.go:35] waiting up to 6m0s for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731651   72441 node_ready.go:49] node "embed-certs-458066" has status "Ready":"True"
	I0906 20:09:11.731679   72441 node_ready.go:38] duration metric: took 26.546983ms for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731691   72441 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:11.740680   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:11.767740   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:11.767760   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:11.771571   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.804408   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:11.804435   72441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:11.844160   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.856217   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:11.856240   72441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:11.899134   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:13.159543   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.315345353s)
	I0906 20:09:13.159546   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387931315s)
	I0906 20:09:13.159639   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159660   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159601   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159711   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.159985   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.159997   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160008   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160018   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160080   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160095   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160104   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160115   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160265   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160289   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160401   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160417   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185478   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.185512   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.185914   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.185934   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185949   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.228561   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.329382232s)
	I0906 20:09:13.228621   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.228636   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228924   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.228978   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.228991   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.229001   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.229229   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.229258   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.229270   72441 addons.go:475] Verifying addon metrics-server=true in "embed-certs-458066"
	I0906 20:09:13.230827   72441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:09.691281   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:11.692514   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:13.231988   72441 addons.go:510] duration metric: took 1.785558897s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0906 20:09:13.750043   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.247314   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.748039   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:16.748064   72441 pod_ready.go:82] duration metric: took 5.007352361s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:16.748073   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:14.192167   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.691856   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:18.754580   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:19.254643   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:19.254669   72441 pod_ready.go:82] duration metric: took 2.506589666s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:19.254680   72441 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762162   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.762188   72441 pod_ready.go:82] duration metric: took 1.507501384s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762202   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770835   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.770860   72441 pod_ready.go:82] duration metric: took 8.65029ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770872   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779692   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.779713   72441 pod_ready.go:82] duration metric: took 8.832607ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779725   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786119   72441 pod_ready.go:93] pod "kube-proxy-rzx2f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.786146   72441 pod_ready.go:82] duration metric: took 6.414063ms for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786158   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852593   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.852630   72441 pod_ready.go:82] duration metric: took 66.461213ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852642   72441 pod_ready.go:39] duration metric: took 9.120937234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:20.852663   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:20.852729   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:20.871881   72441 api_server.go:72] duration metric: took 9.425481233s to wait for apiserver process to appear ...
	I0906 20:09:20.871911   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:20.871927   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:09:20.876997   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:09:20.878290   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:20.878314   72441 api_server.go:131] duration metric: took 6.396943ms to wait for apiserver health ...
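	[editor's note] The api_server.go step above issues a GET against https://192.168.39.118:8443/healthz and treats a 200 response with body "ok" as healthy. A minimal equivalent probe is sketched below; skipping TLS verification is a simplification for the example, not what minikube does with the cluster certificates.

	// Illustrative sketch: probe an apiserver /healthz endpoint and report
	// the status code and body, as in the api_server.go lines above.
	// The address comes from this log; TLS verification is skipped for brevity.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		const url = "https://192.168.39.118:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
			},
		}

		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	}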
	I0906 20:09:20.878324   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:21.057265   72441 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:21.057303   72441 system_pods.go:61] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.057312   72441 system_pods.go:61] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.057319   72441 system_pods.go:61] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.057326   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.057332   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.057338   72441 system_pods.go:61] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.057345   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.057356   72441 system_pods.go:61] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.057367   72441 system_pods.go:61] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.057381   72441 system_pods.go:74] duration metric: took 179.050809ms to wait for pod list to return data ...
	I0906 20:09:21.057394   72441 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:21.252816   72441 default_sa.go:45] found service account: "default"
	I0906 20:09:21.252842   72441 default_sa.go:55] duration metric: took 195.436403ms for default service account to be created ...
	I0906 20:09:21.252851   72441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:21.455714   72441 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:21.455742   72441 system_pods.go:89] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.455748   72441 system_pods.go:89] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.455752   72441 system_pods.go:89] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.455755   72441 system_pods.go:89] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.455759   72441 system_pods.go:89] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.455763   72441 system_pods.go:89] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.455766   72441 system_pods.go:89] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.455772   72441 system_pods.go:89] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.455776   72441 system_pods.go:89] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.455784   72441 system_pods.go:126] duration metric: took 202.909491ms to wait for k8s-apps to be running ...
	I0906 20:09:21.455791   72441 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:21.455832   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.474124   72441 system_svc.go:56] duration metric: took 18.325386ms WaitForService to wait for kubelet
	I0906 20:09:21.474150   72441 kubeadm.go:582] duration metric: took 10.027757317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:21.474172   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:21.653674   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:21.653697   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:21.653708   72441 node_conditions.go:105] duration metric: took 179.531797ms to run NodePressure ...
	I0906 20:09:21.653718   72441 start.go:241] waiting for startup goroutines ...
	I0906 20:09:21.653727   72441 start.go:246] waiting for cluster config update ...
	I0906 20:09:21.653740   72441 start.go:255] writing updated cluster config ...
	I0906 20:09:21.654014   72441 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:21.703909   72441 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:21.705502   72441 out.go:177] * Done! kubectl is now configured to use "embed-certs-458066" cluster and "default" namespace by default
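	[editor's note] The start.go:600 summary above compares the kubectl client version with the cluster version and reports "minor skew: 0". The sketch below shows the arithmetic that message implies (parse "major.minor.patch", take the absolute difference of the minor numbers); it is not minikube's start.go code, only an illustration of the check.

	// Illustrative sketch: compute the minor-version skew summarized by
	// "kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)".
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorOf extracts the minor number from a "major.minor.patch" version string.
	func minorOf(version string) (int, error) {
		parts := strings.Split(version, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", version)
		}
		return strconv.Atoi(parts[1])
	}

	func main() {
		client, cluster := "1.31.0", "1.31.0" // versions from the log above

		cm, err1 := minorOf(client)
		sm, err2 := minorOf(cluster)
		if err1 != nil || err2 != nil {
			fmt.Println("could not parse versions:", err1, err2)
			return
		}

		skew := cm - sm
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	}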
	I0906 20:09:21.102986   72867 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.269383553s)
	I0906 20:09:21.103094   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.118935   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:21.129099   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:21.139304   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:21.139326   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:21.139374   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:09:21.149234   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:21.149289   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:21.160067   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:09:21.169584   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:21.169664   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:21.179885   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.190994   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:21.191062   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.201649   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:09:21.211165   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:21.211223   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:21.220998   72867 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:21.269780   72867 kubeadm.go:310] W0906 20:09:21.240800    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.270353   72867 kubeadm.go:310] W0906 20:09:21.241533    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.389445   72867 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:18.692475   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:21.193075   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:23.697031   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:26.191208   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:28.192166   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:30.493468   72867 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:30.493543   72867 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:30.493620   72867 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:30.493751   72867 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:30.493891   72867 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:30.493971   72867 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:30.495375   72867 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:30.495467   72867 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:30.495537   72867 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:30.495828   72867 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:30.495913   72867 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:30.495977   72867 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:30.496024   72867 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:30.496112   72867 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:30.496207   72867 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:30.496308   72867 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:30.496400   72867 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:30.496452   72867 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:30.496519   72867 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:30.496601   72867 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:30.496690   72867 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:30.496774   72867 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:30.496887   72867 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:30.496946   72867 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:30.497018   72867 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:30.497074   72867 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:30.498387   72867 out.go:235]   - Booting up control plane ...
	I0906 20:09:30.498472   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:30.498550   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:30.498616   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:30.498715   72867 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:30.498786   72867 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:30.498821   72867 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:30.498969   72867 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:30.499076   72867 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:30.499126   72867 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.325552ms
	I0906 20:09:30.499189   72867 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:30.499269   72867 kubeadm.go:310] [api-check] The API server is healthy after 5.002261512s
	I0906 20:09:30.499393   72867 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:30.499507   72867 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:30.499586   72867 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:30.499818   72867 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-653828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:30.499915   72867 kubeadm.go:310] [bootstrap-token] Using token: 6yha4r.f9kcjkhkq2u0pp1e
	I0906 20:09:30.501217   72867 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:30.501333   72867 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:30.501438   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:30.501630   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:30.501749   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:30.501837   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:30.501904   72867 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:30.501996   72867 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:30.502032   72867 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:30.502085   72867 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:30.502093   72867 kubeadm.go:310] 
	I0906 20:09:30.502153   72867 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:30.502166   72867 kubeadm.go:310] 
	I0906 20:09:30.502242   72867 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:30.502257   72867 kubeadm.go:310] 
	I0906 20:09:30.502290   72867 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:30.502358   72867 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:30.502425   72867 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:30.502433   72867 kubeadm.go:310] 
	I0906 20:09:30.502486   72867 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:30.502494   72867 kubeadm.go:310] 
	I0906 20:09:30.502529   72867 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:30.502536   72867 kubeadm.go:310] 
	I0906 20:09:30.502575   72867 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:30.502633   72867 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:30.502706   72867 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:30.502720   72867 kubeadm.go:310] 
	I0906 20:09:30.502791   72867 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:30.502882   72867 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:30.502893   72867 kubeadm.go:310] 
	I0906 20:09:30.502982   72867 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503099   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:30.503120   72867 kubeadm.go:310] 	--control-plane 
	I0906 20:09:30.503125   72867 kubeadm.go:310] 
	I0906 20:09:30.503240   72867 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:30.503247   72867 kubeadm.go:310] 
	I0906 20:09:30.503312   72867 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503406   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:09:30.503416   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:09:30.503424   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:30.504880   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:30.505997   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:30.517864   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
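	[editor's note] The step above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the file's contents do not appear in the log. The sketch below writes a generic bridge+portmap conflist of the kind a crio runtime would pick up. Every field value here is an assumption for illustration, not the file minikube actually generates.

	// Illustrative sketch: write a generic bridge CNI conflist to the path
	// used in the log above. The conflist contents are assumed example values.
	package main

	import (
		"fmt"
		"os"
	)

	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "k8s-demo-net",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		path := "/etc/cni/net.d/1-k8s.conflist"
		if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
			fmt.Println("write failed (likely needs root):", err)
			return
		}
		fmt.Println("wrote", len(conflist), "bytes to", path)
	}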
	I0906 20:09:30.539641   72867 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:30.539731   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653828 minikube.k8s.io/updated_at=2024_09_06T20_09_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=default-k8s-diff-port-653828 minikube.k8s.io/primary=true
	I0906 20:09:30.539732   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.576812   72867 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:30.742163   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.242299   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.742502   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.192201   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.691488   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.242418   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:32.742424   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.242317   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.742587   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.242563   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.342481   72867 kubeadm.go:1113] duration metric: took 3.802829263s to wait for elevateKubeSystemPrivileges
	I0906 20:09:34.342520   72867 kubeadm.go:394] duration metric: took 5m1.826839653s to StartCluster
	I0906 20:09:34.342542   72867 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.342640   72867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:34.345048   72867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.345461   72867 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:34.345576   72867 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:34.345655   72867 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345691   72867 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653828"
	I0906 20:09:34.345696   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:34.345699   72867 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345712   72867 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345737   72867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653828"
	W0906 20:09:34.345703   72867 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:34.345752   72867 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.345762   72867 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:34.345779   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.345795   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.346102   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346136   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346174   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346195   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346231   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346201   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.347895   72867 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:34.349535   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:34.363021   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0906 20:09:34.363492   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.364037   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.364062   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.364463   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.365147   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.365186   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.365991   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36811
	I0906 20:09:34.366024   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0906 20:09:34.366472   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366512   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366953   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.366970   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367086   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.367113   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367494   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367642   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367988   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.368011   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.368282   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.375406   72867 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.375432   72867 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:34.375460   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.375825   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.375858   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.382554   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0906 20:09:34.383102   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.383600   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.383616   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.383938   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.384214   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.385829   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.387409   72867 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:34.388348   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:34.388366   72867 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:34.388381   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.392542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.392813   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.392828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.393018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.393068   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0906 20:09:34.393374   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.393439   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.393550   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.393686   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.394089   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.394116   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.394464   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.394651   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.396559   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.396712   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0906 20:09:34.397142   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.397646   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.397669   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.397929   72867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:34.398023   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.398468   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.398511   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.399007   72867 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.399024   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:34.399043   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.405024   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405057   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.405081   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405287   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.405479   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.405634   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.405752   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.414779   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0906 20:09:34.415230   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.415662   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.415679   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.415993   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.416151   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.417818   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.418015   72867 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.418028   72867 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:34.418045   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.421303   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421379   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.421399   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421645   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.421815   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.421979   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.422096   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.582923   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:34.600692   72867 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617429   72867 node_ready.go:49] node "default-k8s-diff-port-653828" has status "Ready":"True"
	I0906 20:09:34.617454   72867 node_ready.go:38] duration metric: took 16.723446ms for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617465   72867 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:34.632501   72867 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:34.679561   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.682999   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.746380   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:34.746406   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:34.876650   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:34.876680   72867 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:34.935388   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:34.935415   72867 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:35.092289   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:35.709257   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02965114s)
	I0906 20:09:35.709297   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026263795s)
	I0906 20:09:35.709352   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709373   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709319   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709398   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709810   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.709911   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709898   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709926   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.709954   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709962   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709876   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710029   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710047   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.710065   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.710226   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710238   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710636   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.710665   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710681   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754431   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.754458   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.754765   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.754781   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754821   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.181191   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:36.181219   72867 pod_ready.go:82] duration metric: took 1.54868366s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.181233   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.351617   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.259284594s)
	I0906 20:09:36.351684   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.351701   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.351992   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352078   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352100   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.352111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.352055   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352402   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352914   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352934   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352945   72867 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-653828"
	I0906 20:09:36.354972   72867 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:36.356127   72867 addons.go:510] duration metric: took 2.010554769s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0906 20:09:34.695700   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:37.193366   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:38.187115   72867 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:39.188966   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:39.188998   72867 pod_ready.go:82] duration metric: took 3.007757042s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:39.189012   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:41.196228   72867 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.206614   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.206636   72867 pod_ready.go:82] duration metric: took 3.017616218s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.206647   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212140   72867 pod_ready.go:93] pod "kube-proxy-7846f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.212165   72867 pod_ready.go:82] duration metric: took 5.512697ms for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212174   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217505   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.217527   72867 pod_ready.go:82] duration metric: took 5.346748ms for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217534   72867 pod_ready.go:39] duration metric: took 7.600058293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:42.217549   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:42.217600   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:42.235961   72867 api_server.go:72] duration metric: took 7.890460166s to wait for apiserver process to appear ...
	I0906 20:09:42.235987   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:42.236003   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:09:42.240924   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:09:42.241889   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:42.241912   72867 api_server.go:131] duration metric: took 5.919055ms to wait for apiserver health ...
	I0906 20:09:42.241922   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:42.247793   72867 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:42.247825   72867 system_pods.go:61] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.247833   72867 system_pods.go:61] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.247839   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.247845   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.247852   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.247857   72867 system_pods.go:61] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.247861   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.247866   72867 system_pods.go:61] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.247873   72867 system_pods.go:61] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.247883   72867 system_pods.go:74] duration metric: took 5.95413ms to wait for pod list to return data ...
	I0906 20:09:42.247893   72867 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:42.251260   72867 default_sa.go:45] found service account: "default"
	I0906 20:09:42.251277   72867 default_sa.go:55] duration metric: took 3.3795ms for default service account to be created ...
	I0906 20:09:42.251284   72867 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:42.256204   72867 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:42.256228   72867 system_pods.go:89] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.256233   72867 system_pods.go:89] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.256237   72867 system_pods.go:89] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.256241   72867 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.256245   72867 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.256249   72867 system_pods.go:89] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.256252   72867 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.256258   72867 system_pods.go:89] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.256261   72867 system_pods.go:89] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.256270   72867 system_pods.go:126] duration metric: took 4.981383ms to wait for k8s-apps to be running ...
	I0906 20:09:42.256278   72867 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:42.256323   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:42.272016   72867 system_svc.go:56] duration metric: took 15.727796ms WaitForService to wait for kubelet
	I0906 20:09:42.272050   72867 kubeadm.go:582] duration metric: took 7.926551396s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:42.272081   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:42.275486   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:42.275516   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:42.275527   72867 node_conditions.go:105] duration metric: took 3.439966ms to run NodePressure ...
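	[editor's note] The node_conditions.go lines above read each node's capacity (ephemeral storage 17734596Ki, 2 CPUs here) while verifying NodePressure. A minimal client-go sketch that lists nodes and prints the same capacity fields plus the pressure conditions follows; the kubeconfig path is a placeholder and this is not minikube's node_conditions.go code.

	// Illustrative sketch: list nodes and print the capacity fields and
	// pressure conditions reported by the node_conditions.go lines above.
	// The kubeconfig path is a placeholder.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, node := range nodes.Items {
			cpu := node.Status.Capacity[corev1.ResourceCPU]
			storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", node.Name, storage.String(), cpu.String())
			for _, c := range node.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					fmt.Printf("  %s=%s\n", c.Type, c.Status)
				}
			}
		}
	}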
	I0906 20:09:42.275540   72867 start.go:241] waiting for startup goroutines ...
	I0906 20:09:42.275548   72867 start.go:246] waiting for cluster config update ...
	I0906 20:09:42.275561   72867 start.go:255] writing updated cluster config ...
	I0906 20:09:42.275823   72867 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:42.326049   72867 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:42.328034   72867 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653828" cluster and "default" namespace by default
	I0906 20:09:39.692393   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.192176   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:44.691934   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:45.185317   72322 pod_ready.go:82] duration metric: took 4m0.000138495s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	E0906 20:09:45.185352   72322 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:09:45.185371   72322 pod_ready.go:39] duration metric: took 4m12.222584677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:45.185403   72322 kubeadm.go:597] duration metric: took 4m20.152442555s to restartPrimaryControlPlane
	W0906 20:09:45.185466   72322 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:45.185496   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:47.714239   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:09:47.714464   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:47.714711   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:09:52.715187   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:52.715391   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:02.716155   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:02.716424   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
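	[editor's note] The [kubelet-check] failures above come from kubeadm polling http://localhost:10248/healthz until the kubelet answers; while the kubelet is down the connection is refused. An equivalent minimal probe with a simple retry loop is sketched below; the retry interval and attempt count are assumptions, not kubeadm's actual schedule.

	// Illustrative sketch: poll the kubelet healthz endpoint referenced by the
	// [kubelet-check] lines above, retrying while the connection is refused.
	// The retry interval and attempt count are assumptions.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		const url = "http://localhost:10248/healthz"
		client := &http.Client{Timeout: 3 * time.Second}

		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(5 * time.Second)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
			return
		}
		fmt.Println("kubelet did not become healthy")
	}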
	I0906 20:10:11.446625   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261097398s)
	I0906 20:10:11.446717   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:11.472899   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:10:11.492643   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:10:11.509855   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:10:11.509878   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:10:11.509933   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:10:11.523039   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:10:11.523099   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:10:11.540484   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:10:11.560246   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:10:11.560323   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:10:11.585105   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.596067   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:10:11.596138   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.607049   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:10:11.616982   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:10:11.617058   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
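The grep/rm sequence above implements a simple staleness rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so the following kubeadm init can regenerate it. A minimal Go sketch of the same check (file list and endpoint taken from the log; removeIfStale is an illustrative helper, and a missing file is simply skipped, which matches the effect of the log's rm -f):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path when it exists but does not mention endpoint,
// mirroring the "grep ... ; rm -f ..." pattern in the log above.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the expected endpoint
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, "cleanup:", err)
		}
	}
}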
	I0906 20:10:11.627880   72322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:10:11.672079   72322 kubeadm.go:310] W0906 20:10:11.645236    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.672935   72322 kubeadm.go:310] W0906 20:10:11.646151    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.789722   72322 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:10:20.270339   72322 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:10:20.270450   72322 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:10:20.270551   72322 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:10:20.270697   72322 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:10:20.270837   72322 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:10:20.270932   72322 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:10:20.272324   72322 out.go:235]   - Generating certificates and keys ...
	I0906 20:10:20.272437   72322 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:10:20.272530   72322 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:10:20.272634   72322 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:10:20.272732   72322 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:10:20.272842   72322 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:10:20.272950   72322 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:10:20.273051   72322 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:10:20.273135   72322 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:10:20.273272   72322 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:10:20.273361   72322 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:10:20.273400   72322 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:10:20.273456   72322 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:10:20.273517   72322 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:10:20.273571   72322 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:10:20.273625   72322 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:10:20.273682   72322 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:10:20.273731   72322 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:10:20.273801   72322 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:10:20.273856   72322 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:10:20.275359   72322 out.go:235]   - Booting up control plane ...
	I0906 20:10:20.275466   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:10:20.275539   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:10:20.275595   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:10:20.275692   72322 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:10:20.275774   72322 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:10:20.275812   72322 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:10:20.275917   72322 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:10:20.276005   72322 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:10:20.276063   72322 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001365031s
	I0906 20:10:20.276127   72322 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:10:20.276189   72322 kubeadm.go:310] [api-check] The API server is healthy after 5.002810387s
	I0906 20:10:20.276275   72322 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:10:20.276410   72322 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:10:20.276480   72322 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:10:20.276639   72322 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-504385 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:10:20.276690   72322 kubeadm.go:310] [bootstrap-token] Using token: fv12w2.cc6vcthx5yn6r6ru
	I0906 20:10:20.277786   72322 out.go:235]   - Configuring RBAC rules ...
	I0906 20:10:20.277872   72322 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:10:20.277941   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:10:20.278082   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:10:20.278231   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:10:20.278351   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:10:20.278426   72322 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:10:20.278541   72322 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:10:20.278614   72322 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:10:20.278692   72322 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:10:20.278700   72322 kubeadm.go:310] 
	I0906 20:10:20.278780   72322 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:10:20.278790   72322 kubeadm.go:310] 
	I0906 20:10:20.278880   72322 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:10:20.278889   72322 kubeadm.go:310] 
	I0906 20:10:20.278932   72322 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:10:20.279023   72322 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:10:20.279079   72322 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:10:20.279086   72322 kubeadm.go:310] 
	I0906 20:10:20.279141   72322 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:10:20.279148   72322 kubeadm.go:310] 
	I0906 20:10:20.279186   72322 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:10:20.279195   72322 kubeadm.go:310] 
	I0906 20:10:20.279291   72322 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:10:20.279420   72322 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:10:20.279524   72322 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:10:20.279535   72322 kubeadm.go:310] 
	I0906 20:10:20.279647   72322 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:10:20.279756   72322 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:10:20.279767   72322 kubeadm.go:310] 
	I0906 20:10:20.279896   72322 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280043   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:10:20.280080   72322 kubeadm.go:310] 	--control-plane 
	I0906 20:10:20.280090   72322 kubeadm.go:310] 
	I0906 20:10:20.280230   72322 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:10:20.280258   72322 kubeadm.go:310] 
	I0906 20:10:20.280365   72322 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280514   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
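The --discovery-token-ca-cert-hash value in the join commands above is a public-key pin: the SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A minimal sketch that recomputes it from the CA file in the certificate directory logged earlier (/var/lib/minikube/certs); a joining node compares this pin against the CA it fetches from the cluster:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA certificate and hash its SubjectPublicKeyInfo,
	// which is what kubeadm prints after "sha256:" in the join command.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}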
	I0906 20:10:20.280532   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:10:20.280541   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:10:20.282066   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:10:20.283228   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:10:20.294745   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
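The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube generates for the kvm2 driver with the crio runtime. Its exact contents are not shown in this log, so the sketch below writes an illustrative conflist of the same general shape; the concrete fields, plugin options, and pod CIDR are assumptions, not the file from this run:

package main

import "os"

// An illustrative bridge CNI conflist in the spirit of the file minikube
// writes; the real 1-k8s.conflist in this run may differ in fields and CIDR.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}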
	I0906 20:10:20.317015   72322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-504385 minikube.k8s.io/updated_at=2024_09_06T20_10_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=no-preload-504385 minikube.k8s.io/primary=true
	I0906 20:10:20.528654   72322 ops.go:34] apiserver oom_adj: -16
	I0906 20:10:20.528681   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.029394   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.528922   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.029667   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.528814   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.029163   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.529709   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.029277   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.529466   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.668636   72322 kubeadm.go:1113] duration metric: took 4.351557657s to wait for elevateKubeSystemPrivileges
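The burst of kubectl get sa default calls at roughly half-second intervals above is a readiness poll: elevateKubeSystemPrivileges only finishes once the default service account exists, a sign that the service-account controller is up and the cluster-admin binding just applied can take effect. A minimal sketch of that polling loop, shelling out to kubectl the same way the log does over SSH (waitForDefaultSA is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout elapses, mirroring the ~500ms polling visible in the log above.
func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not created within %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default service account is present")
}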
	I0906 20:10:24.668669   72322 kubeadm.go:394] duration metric: took 4m59.692142044s to StartCluster
	I0906 20:10:24.668690   72322 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.668775   72322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:10:24.670483   72322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.670765   72322 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:10:24.670874   72322 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:10:24.670975   72322 addons.go:69] Setting storage-provisioner=true in profile "no-preload-504385"
	I0906 20:10:24.670990   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:10:24.671015   72322 addons.go:234] Setting addon storage-provisioner=true in "no-preload-504385"
	W0906 20:10:24.671027   72322 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:10:24.670988   72322 addons.go:69] Setting default-storageclass=true in profile "no-preload-504385"
	I0906 20:10:24.671020   72322 addons.go:69] Setting metrics-server=true in profile "no-preload-504385"
	I0906 20:10:24.671053   72322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-504385"
	I0906 20:10:24.671069   72322 addons.go:234] Setting addon metrics-server=true in "no-preload-504385"
	I0906 20:10:24.671057   72322 host.go:66] Checking if "no-preload-504385" exists ...
	W0906 20:10:24.671080   72322 addons.go:243] addon metrics-server should already be in state true
	I0906 20:10:24.671112   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.671387   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671413   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671433   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671462   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671476   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671509   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.672599   72322 out.go:177] * Verifying Kubernetes components...
	I0906 20:10:24.674189   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:10:24.688494   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0906 20:10:24.689082   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.689564   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.689586   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.690020   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.690242   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.691753   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0906 20:10:24.691758   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0906 20:10:24.692223   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692314   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692744   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692761   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.692892   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692912   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.693162   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693498   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693821   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.693851   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694035   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694067   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694118   72322 addons.go:234] Setting addon default-storageclass=true in "no-preload-504385"
	W0906 20:10:24.694133   72322 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:10:24.694159   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.694503   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694533   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.710695   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0906 20:10:24.712123   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.712820   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.712844   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.713265   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.713488   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.714238   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0906 20:10:24.714448   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0906 20:10:24.714584   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.714801   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.715454   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715472   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715517   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.715631   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715643   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715961   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716468   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716527   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.717120   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.717170   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.717534   72322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:10:24.718838   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.719392   72322 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:24.719413   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:10:24.719435   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.720748   72322 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:10:22.717567   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:22.717827   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:24.722045   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:10:24.722066   72322 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:10:24.722084   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.722722   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723383   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.723408   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723545   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.723788   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.723970   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.724133   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.725538   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.725987   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.726006   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.726137   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.726317   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.726499   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.726629   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.734236   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0906 20:10:24.734597   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.735057   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.735069   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.735479   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.735612   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.737446   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.737630   72322 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:24.737647   72322 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:10:24.737658   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.740629   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741040   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.741063   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741251   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.741418   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.741530   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.741659   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.903190   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:10:24.944044   72322 node_ready.go:35] waiting up to 6m0s for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960395   72322 node_ready.go:49] node "no-preload-504385" has status "Ready":"True"
	I0906 20:10:24.960436   72322 node_ready.go:38] duration metric: took 16.357022ms for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960453   72322 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:24.981153   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:25.103072   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:25.113814   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:10:25.113843   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:10:25.123206   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:25.209178   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:10:25.209208   72322 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:10:25.255577   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.255604   72322 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:10:25.297179   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.336592   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336615   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.336915   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.336930   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.336938   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336945   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.337164   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.337178   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.350330   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.350356   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.350630   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.350648   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850349   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850377   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850688   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.850707   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850717   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850725   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850974   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.851012   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.033886   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.033918   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034215   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034221   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034241   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034250   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.034258   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034525   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034533   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034579   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034593   72322 addons.go:475] Verifying addon metrics-server=true in "no-preload-504385"
	I0906 20:10:26.036358   72322 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0906 20:10:26.037927   72322 addons.go:510] duration metric: took 1.367055829s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0906 20:10:26.989945   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:28.987386   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:28.987407   72322 pod_ready.go:82] duration metric: took 4.006228588s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:28.987419   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:30.994020   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:32.999308   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:32.999332   72322 pod_ready.go:82] duration metric: took 4.01190401s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:32.999344   72322 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005872   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.005898   72322 pod_ready.go:82] duration metric: took 1.006546878s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005908   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010279   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.010306   72322 pod_ready.go:82] duration metric: took 4.391154ms for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010315   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014331   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.014346   72322 pod_ready.go:82] duration metric: took 4.025331ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014354   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018361   72322 pod_ready.go:93] pod "kube-proxy-48s2x" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.018378   72322 pod_ready.go:82] duration metric: took 4.018525ms for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018386   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191606   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.191630   72322 pod_ready.go:82] duration metric: took 173.23777ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191638   72322 pod_ready.go:39] duration metric: took 9.231173272s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
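Each pod_ready.go line above is the result of reading a pod and inspecting its Ready condition. A minimal client-go sketch of that check, assuming a kubeconfig at the default location (the package paths are the standard client-go ones; the pod name is just the scheduler pod from this run):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has condition Ready=True.
func podIsReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(client, "kube-system", "kube-scheduler-no-preload-504385")
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", ready)
}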
	I0906 20:10:34.191652   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:10:34.191738   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:10:34.207858   72322 api_server.go:72] duration metric: took 9.537052258s to wait for apiserver process to appear ...
	I0906 20:10:34.207883   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:10:34.207904   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:10:34.214477   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:10:34.216178   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:10:34.216211   72322 api_server.go:131] duration metric: took 8.319856ms to wait for apiserver health ...
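The healthz probe above is a plain HTTPS GET against the apiserver; /healthz is readable without credentials on a default-configured apiserver, so only the cluster CA is needed to validate the connection. A minimal sketch against the address and certificate directory from this run (a fuller check would also confirm the body is "ok"):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Trust the cluster CA used by this control plane.
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("could not parse CA certificate")
	}

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	resp, err := client.Get("https://192.168.61.184:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}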
	I0906 20:10:34.216221   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:10:34.396409   72322 system_pods.go:59] 9 kube-system pods found
	I0906 20:10:34.396443   72322 system_pods.go:61] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.396451   72322 system_pods.go:61] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.396456   72322 system_pods.go:61] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.396461   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.396468   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.396472   72322 system_pods.go:61] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.396477   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.396487   72322 system_pods.go:61] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.396502   72322 system_pods.go:61] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.396514   72322 system_pods.go:74] duration metric: took 180.284785ms to wait for pod list to return data ...
	I0906 20:10:34.396526   72322 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:10:34.592160   72322 default_sa.go:45] found service account: "default"
	I0906 20:10:34.592186   72322 default_sa.go:55] duration metric: took 195.651674ms for default service account to be created ...
	I0906 20:10:34.592197   72322 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:10:34.795179   72322 system_pods.go:86] 9 kube-system pods found
	I0906 20:10:34.795210   72322 system_pods.go:89] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.795217   72322 system_pods.go:89] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.795221   72322 system_pods.go:89] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.795224   72322 system_pods.go:89] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.795228   72322 system_pods.go:89] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.795232   72322 system_pods.go:89] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.795238   72322 system_pods.go:89] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.795244   72322 system_pods.go:89] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.795249   72322 system_pods.go:89] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.795258   72322 system_pods.go:126] duration metric: took 203.05524ms to wait for k8s-apps to be running ...
	I0906 20:10:34.795270   72322 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:10:34.795328   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:34.810406   72322 system_svc.go:56] duration metric: took 15.127486ms WaitForService to wait for kubelet
	I0906 20:10:34.810437   72322 kubeadm.go:582] duration metric: took 10.13963577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:10:34.810461   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:10:34.993045   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:10:34.993077   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:10:34.993092   72322 node_conditions.go:105] duration metric: took 182.626456ms to run NodePressure ...
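The NodePressure verification above reads each node's capacity and pressure conditions: the logged values (ephemeral storage 17734596Ki, 2 CPUs) come from node status, and the check passes as long as no memory, disk, or PID pressure condition is True. A minimal client-go sketch of the same read, assuming a kubeconfig at the default location:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name, cpu.String(), storage.String())
		for _, cond := range node.Status.Conditions {
			switch cond.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if cond.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True\n", cond.Type)
				}
			}
		}
	}
}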
	I0906 20:10:34.993105   72322 start.go:241] waiting for startup goroutines ...
	I0906 20:10:34.993112   72322 start.go:246] waiting for cluster config update ...
	I0906 20:10:34.993122   72322 start.go:255] writing updated cluster config ...
	I0906 20:10:34.993401   72322 ssh_runner.go:195] Run: rm -f paused
	I0906 20:10:35.043039   72322 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:10:35.045782   72322 out.go:177] * Done! kubectl is now configured to use "no-preload-504385" cluster and "default" namespace by default
	I0906 20:11:02.719781   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:02.720062   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:02.720077   73230 kubeadm.go:310] 
	I0906 20:11:02.720125   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:11:02.720177   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:11:02.720189   73230 kubeadm.go:310] 
	I0906 20:11:02.720246   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:11:02.720290   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:11:02.720443   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:11:02.720469   73230 kubeadm.go:310] 
	I0906 20:11:02.720593   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:11:02.720665   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:11:02.720722   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:11:02.720746   73230 kubeadm.go:310] 
	I0906 20:11:02.720900   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:11:02.721018   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:11:02.721028   73230 kubeadm.go:310] 
	I0906 20:11:02.721180   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:11:02.721311   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:11:02.721405   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:11:02.721500   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:11:02.721512   73230 kubeadm.go:310] 
	I0906 20:11:02.722088   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:11:02.722199   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:11:02.722310   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0906 20:11:02.722419   73230 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
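The failing kubelet-check in the output above keeps retrying curl against http://localhost:10248/healthz; port 10248 is the kubelet's localhost-only healthz endpoint, so a connection refused there means no kubelet process is listening at all, which matches the "kubelet is not running" diagnosis. A minimal Go equivalent of that probe, runnable on the node itself:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the kubeadm kubelet-check polls; connection refused
	// means no kubelet is listening on this node at all.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}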
	
	I0906 20:11:02.722469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:11:03.188091   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:11:03.204943   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:11:03.215434   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:11:03.215458   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:11:03.215506   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:11:03.225650   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:11:03.225713   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:11:03.236252   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:11:03.245425   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:11:03.245489   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:11:03.255564   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.264932   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:11:03.265014   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.274896   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:11:03.284027   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:11:03.284092   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
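The cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that are missing or stale before retrying kubeadm init. A minimal shell sketch of that same check, using only the endpoint and paths shown in the log (illustrative of what the automation does, not an extra step to run):

	# Check-and-remove loop equivalent to the cleanup logged above (run on the node).
	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"   # missing or stale, so drop it before 'kubeadm init'
	  fi
	done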
	I0906 20:11:03.294368   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:11:03.377411   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:11:03.377509   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:11:03.537331   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:11:03.537590   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:11:03.537722   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:11:03.728458   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:11:03.730508   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:11:03.730621   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:11:03.730720   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:11:03.730869   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:11:03.730984   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:11:03.731082   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:11:03.731167   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:11:03.731258   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:11:03.731555   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:11:03.731896   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:11:03.732663   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:11:03.732953   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:11:03.733053   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:11:03.839927   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:11:03.988848   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:11:04.077497   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:11:04.213789   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:11:04.236317   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:11:04.237625   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:11:04.237719   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:11:04.399036   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:11:04.400624   73230 out.go:235]   - Booting up control plane ...
	I0906 20:11:04.400709   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:11:04.401417   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:11:04.402751   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:11:04.404122   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:11:04.407817   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:11:44.410273   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:11:44.410884   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:44.411132   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:49.411428   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:49.411674   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:59.412917   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:59.413182   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:19.414487   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:19.414692   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415457   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:59.415729   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415750   73230 kubeadm.go:310] 
	I0906 20:12:59.415808   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:12:59.415864   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:12:59.415874   73230 kubeadm.go:310] 
	I0906 20:12:59.415933   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:12:59.415979   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:12:59.416147   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:12:59.416167   73230 kubeadm.go:310] 
	I0906 20:12:59.416332   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:12:59.416372   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:12:59.416420   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:12:59.416428   73230 kubeadm.go:310] 
	I0906 20:12:59.416542   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:12:59.416650   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:12:59.416659   73230 kubeadm.go:310] 
	I0906 20:12:59.416818   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:12:59.416928   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:12:59.417030   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:12:59.417139   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:12:59.417153   73230 kubeadm.go:310] 
	I0906 20:12:59.417400   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:12:59.417485   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:12:59.417559   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
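The repeated [kubelet-check] failures above are probes of the kubelet's local healthz endpoint, which never answered. The same thing can be confirmed by hand on the node using only the commands already referenced in this error output:

	# Manual version of the kubelet-check probe above (a healthy kubelet answers "ok";
	# in this run the connection is refused because the kubelet never came up).
	curl -sSL http://localhost:10248/healthz
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet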
	I0906 20:12:59.417626   73230 kubeadm.go:394] duration metric: took 8m3.018298427s to StartCluster
	I0906 20:12:59.417673   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:12:59.417741   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:12:59.464005   73230 cri.go:89] found id: ""
	I0906 20:12:59.464033   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.464040   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:12:59.464045   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:12:59.464101   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:12:59.504218   73230 cri.go:89] found id: ""
	I0906 20:12:59.504252   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.504264   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:12:59.504271   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:12:59.504327   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:12:59.541552   73230 cri.go:89] found id: ""
	I0906 20:12:59.541579   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.541589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:12:59.541596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:12:59.541663   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:12:59.580135   73230 cri.go:89] found id: ""
	I0906 20:12:59.580158   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.580168   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:12:59.580174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:12:59.580220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:12:59.622453   73230 cri.go:89] found id: ""
	I0906 20:12:59.622486   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.622498   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:12:59.622518   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:12:59.622587   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:12:59.661561   73230 cri.go:89] found id: ""
	I0906 20:12:59.661590   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.661601   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:12:59.661608   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:12:59.661668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:12:59.695703   73230 cri.go:89] found id: ""
	I0906 20:12:59.695732   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.695742   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:12:59.695749   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:12:59.695808   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:12:59.739701   73230 cri.go:89] found id: ""
	I0906 20:12:59.739733   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.739744   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:12:59.739756   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:12:59.739771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:12:59.791400   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:12:59.791428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:12:59.851142   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:12:59.851179   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:12:59.867242   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:12:59.867278   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:12:59.941041   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:12:59.941060   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:12:59.941071   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
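With no control-plane containers found, minikube falls back to gathering the kubelet, dmesg and CRI-O logs collected above. Roughly the same diagnostics can be reproduced by hand on the node with the commands from this run:

	# Same log gathering as above, run manually on the node.
	sudo crictl ps -a --quiet --name=kube-apiserver                          # empty here: the apiserver container was never created
	sudo journalctl -u kubelet -n 400                                        # kubelet log, the most useful signal for this failure
	sudo journalctl -u crio -n 400                                           # CRI-O log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and errors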
	W0906 20:13:00.061377   73230 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 20:13:00.061456   73230 out.go:270] * 
	W0906 20:13:00.061515   73230 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.061532   73230 out.go:270] * 
	W0906 20:13:00.062343   73230 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 20:13:00.065723   73230 out.go:201] 
	W0906 20:13:00.066968   73230 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.067028   73230 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 20:13:00.067059   73230 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 20:13:00.068497   73230 out.go:201] 
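The suggestion logged above is to retry with the kubelet's cgroup driver forced to systemd. A sketch of that retry follows; <profile> is a placeholder for the affected cluster profile (not a name taken from this report), and whether the flag actually resolves this kubelet failure is not verified here:

	# Retry per the suggestion above; <profile> is a placeholder, not a name from this report.
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	# If it still fails, inspect the kubelet unit on the node as advised:
	minikube ssh -p <profile> "sudo journalctl -xeu kubelet"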
	
	
	==> CRI-O <==
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.063250854Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653977063216906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3649a19-c0e8-4668-966a-f9a86e16d680 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.063834409Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cca7678-5bd2-472d-8c78-192642886c6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.064039493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cca7678-5bd2-472d-8c78-192642886c6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.064326381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15274b39b451abccf3886a126ddfe07922152942a03e462584d3eee39a2c0f3d,PodSandboxId:2d4fdae5623209a4a9c81bbbadb72d27ef92a9cec8ad8d8baac410a02603ebb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653426408777595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f4c6d93e53cbbc0c351c91fb7c74a7de9dd10899a589c1bee05f99af6db6a,PodSandboxId:dec01f5a6cb5ff170f39da6190d9eb3c05e7a4534d47e936bd93819b71fcf7ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426457280757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ffnb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59184ee8-fe9e-479d-b298-0ee9818e4a00,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f41a4d40a24ea78c12c4a62e7ea6f4e09d4ff71e4513cd7d0a8d8dd66996ce8,PodSandboxId:a49d2a2d2ae22949d526811d3867714c9769407d2d8bb11ef4e221a26e0aaa4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426293020635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwxzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2
df0b29-0770-447f-8051-fce39e9acff0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b197d3d52688957a63cce3ecf015b9d869dd995d73cfe8686595ce6ef51df,PodSandboxId:9ac14735d0ad7eab6615b35ba479b621f02f5cb980cef11dcbeb516b0ec1b021,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725653424969666674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48s2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd175211-d965-4b1a-a37a-d1e6df47f09b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badd0b7d7706b2371c40a23bd57b90529c240ed5444d7dc95b4af308c113465f,PodSandboxId:913147acc93fff54b8207751a9fd92d032d6f000344d6d7c0043cfadc44a49cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653414151172082,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123de27c3b8551a9387ecadceaf69150,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c171d8f525af6029e27b4f097742a4573016670bf522f208c75d45ccf03ceb4b,PodSandboxId:05b21dab03397f23365b83428df6728dfdf4f3f6d2885a76045d9b67fdebff0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653414103380518,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57091fe8c0c738bd8713e9b00e9d6f28c32efebf7704702322ad06f77e17f32b,PodSandboxId:84e4b3e7daacf7879bdecebb481070297b782baa94efc4fac759568a69bd114f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653414060527569,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bfa0d921a0ce0a55af27a1696709e36,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f08497dae4ccb599defc3157bd84e508d31375ba472a2bb28419103113ff131,PodSandboxId:65431f984a19ec13097627219fb9430474eb25887961ce21ab24c124cefd3a7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653413999842100,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b99a5d59e524a05b9e2d8501bf6d11,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5378fe314f7a6ccad157d8c7e69480e9a035d341e61e570c71a186f8d71d64,PodSandboxId:3860b04bee19bf7b767c5e11a57b09a688be56266c03bdd875b4842531155254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653127564331448,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8cca7678-5bd2-472d-8c78-192642886c6f name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.105395170Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27c9616a-9c36-4104-a7d2-a5082d5264ed name=/runtime.v1.RuntimeService/Version
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.105489089Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27c9616a-9c36-4104-a7d2-a5082d5264ed name=/runtime.v1.RuntimeService/Version
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.106777941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff440feb-64a1-4ff8-af32-f46bd6efdb85 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.107122280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653977107101192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff440feb-64a1-4ff8-af32-f46bd6efdb85 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.107788036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d34cb92-ce90-4548-ab5d-32f097fb6700 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.107857545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d34cb92-ce90-4548-ab5d-32f097fb6700 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.108076783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15274b39b451abccf3886a126ddfe07922152942a03e462584d3eee39a2c0f3d,PodSandboxId:2d4fdae5623209a4a9c81bbbadb72d27ef92a9cec8ad8d8baac410a02603ebb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653426408777595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f4c6d93e53cbbc0c351c91fb7c74a7de9dd10899a589c1bee05f99af6db6a,PodSandboxId:dec01f5a6cb5ff170f39da6190d9eb3c05e7a4534d47e936bd93819b71fcf7ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426457280757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ffnb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59184ee8-fe9e-479d-b298-0ee9818e4a00,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f41a4d40a24ea78c12c4a62e7ea6f4e09d4ff71e4513cd7d0a8d8dd66996ce8,PodSandboxId:a49d2a2d2ae22949d526811d3867714c9769407d2d8bb11ef4e221a26e0aaa4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426293020635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwxzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2
df0b29-0770-447f-8051-fce39e9acff0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b197d3d52688957a63cce3ecf015b9d869dd995d73cfe8686595ce6ef51df,PodSandboxId:9ac14735d0ad7eab6615b35ba479b621f02f5cb980cef11dcbeb516b0ec1b021,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725653424969666674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48s2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd175211-d965-4b1a-a37a-d1e6df47f09b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badd0b7d7706b2371c40a23bd57b90529c240ed5444d7dc95b4af308c113465f,PodSandboxId:913147acc93fff54b8207751a9fd92d032d6f000344d6d7c0043cfadc44a49cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653414151172082,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123de27c3b8551a9387ecadceaf69150,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c171d8f525af6029e27b4f097742a4573016670bf522f208c75d45ccf03ceb4b,PodSandboxId:05b21dab03397f23365b83428df6728dfdf4f3f6d2885a76045d9b67fdebff0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653414103380518,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57091fe8c0c738bd8713e9b00e9d6f28c32efebf7704702322ad06f77e17f32b,PodSandboxId:84e4b3e7daacf7879bdecebb481070297b782baa94efc4fac759568a69bd114f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653414060527569,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bfa0d921a0ce0a55af27a1696709e36,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f08497dae4ccb599defc3157bd84e508d31375ba472a2bb28419103113ff131,PodSandboxId:65431f984a19ec13097627219fb9430474eb25887961ce21ab24c124cefd3a7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653413999842100,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b99a5d59e524a05b9e2d8501bf6d11,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5378fe314f7a6ccad157d8c7e69480e9a035d341e61e570c71a186f8d71d64,PodSandboxId:3860b04bee19bf7b767c5e11a57b09a688be56266c03bdd875b4842531155254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653127564331448,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d34cb92-ce90-4548-ab5d-32f097fb6700 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.159783529Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb92004f-41a2-4b2c-a059-a4ab7c3a2f61 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.159907229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb92004f-41a2-4b2c-a059-a4ab7c3a2f61 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.161408772Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9e62d01-9e4b-4c48-8452-a9968bf56fd7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.162134411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653977162108877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9e62d01-9e4b-4c48-8452-a9968bf56fd7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.162843562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec462d44-ab50-4723-a0f9-ab7dd5f3d5f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.162935255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec462d44-ab50-4723-a0f9-ab7dd5f3d5f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.163172577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15274b39b451abccf3886a126ddfe07922152942a03e462584d3eee39a2c0f3d,PodSandboxId:2d4fdae5623209a4a9c81bbbadb72d27ef92a9cec8ad8d8baac410a02603ebb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653426408777595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f4c6d93e53cbbc0c351c91fb7c74a7de9dd10899a589c1bee05f99af6db6a,PodSandboxId:dec01f5a6cb5ff170f39da6190d9eb3c05e7a4534d47e936bd93819b71fcf7ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426457280757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ffnb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59184ee8-fe9e-479d-b298-0ee9818e4a00,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f41a4d40a24ea78c12c4a62e7ea6f4e09d4ff71e4513cd7d0a8d8dd66996ce8,PodSandboxId:a49d2a2d2ae22949d526811d3867714c9769407d2d8bb11ef4e221a26e0aaa4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426293020635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwxzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2
df0b29-0770-447f-8051-fce39e9acff0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b197d3d52688957a63cce3ecf015b9d869dd995d73cfe8686595ce6ef51df,PodSandboxId:9ac14735d0ad7eab6615b35ba479b621f02f5cb980cef11dcbeb516b0ec1b021,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725653424969666674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48s2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd175211-d965-4b1a-a37a-d1e6df47f09b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badd0b7d7706b2371c40a23bd57b90529c240ed5444d7dc95b4af308c113465f,PodSandboxId:913147acc93fff54b8207751a9fd92d032d6f000344d6d7c0043cfadc44a49cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653414151172082,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123de27c3b8551a9387ecadceaf69150,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c171d8f525af6029e27b4f097742a4573016670bf522f208c75d45ccf03ceb4b,PodSandboxId:05b21dab03397f23365b83428df6728dfdf4f3f6d2885a76045d9b67fdebff0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653414103380518,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57091fe8c0c738bd8713e9b00e9d6f28c32efebf7704702322ad06f77e17f32b,PodSandboxId:84e4b3e7daacf7879bdecebb481070297b782baa94efc4fac759568a69bd114f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653414060527569,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bfa0d921a0ce0a55af27a1696709e36,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f08497dae4ccb599defc3157bd84e508d31375ba472a2bb28419103113ff131,PodSandboxId:65431f984a19ec13097627219fb9430474eb25887961ce21ab24c124cefd3a7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653413999842100,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b99a5d59e524a05b9e2d8501bf6d11,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5378fe314f7a6ccad157d8c7e69480e9a035d341e61e570c71a186f8d71d64,PodSandboxId:3860b04bee19bf7b767c5e11a57b09a688be56266c03bdd875b4842531155254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653127564331448,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec462d44-ab50-4723-a0f9-ab7dd5f3d5f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.205999749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6a3233c-a72d-4f72-a799-b36294767f2e name=/runtime.v1.RuntimeService/Version
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.206117887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6a3233c-a72d-4f72-a799-b36294767f2e name=/runtime.v1.RuntimeService/Version
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.208741306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4bf596d-f06c-4d90-9d07-8641201314f7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.209126491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653977209103917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4bf596d-f06c-4d90-9d07-8641201314f7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.209745441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09935b18-4576-432d-8b3d-80670c2ef2a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.209798654Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09935b18-4576-432d-8b3d-80670c2ef2a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:19:37 no-preload-504385 crio[709]: time="2024-09-06 20:19:37.209988470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15274b39b451abccf3886a126ddfe07922152942a03e462584d3eee39a2c0f3d,PodSandboxId:2d4fdae5623209a4a9c81bbbadb72d27ef92a9cec8ad8d8baac410a02603ebb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653426408777595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f4c6d93e53cbbc0c351c91fb7c74a7de9dd10899a589c1bee05f99af6db6a,PodSandboxId:dec01f5a6cb5ff170f39da6190d9eb3c05e7a4534d47e936bd93819b71fcf7ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426457280757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ffnb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59184ee8-fe9e-479d-b298-0ee9818e4a00,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f41a4d40a24ea78c12c4a62e7ea6f4e09d4ff71e4513cd7d0a8d8dd66996ce8,PodSandboxId:a49d2a2d2ae22949d526811d3867714c9769407d2d8bb11ef4e221a26e0aaa4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426293020635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwxzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2
df0b29-0770-447f-8051-fce39e9acff0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b197d3d52688957a63cce3ecf015b9d869dd995d73cfe8686595ce6ef51df,PodSandboxId:9ac14735d0ad7eab6615b35ba479b621f02f5cb980cef11dcbeb516b0ec1b021,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725653424969666674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48s2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd175211-d965-4b1a-a37a-d1e6df47f09b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badd0b7d7706b2371c40a23bd57b90529c240ed5444d7dc95b4af308c113465f,PodSandboxId:913147acc93fff54b8207751a9fd92d032d6f000344d6d7c0043cfadc44a49cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653414151172082,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123de27c3b8551a9387ecadceaf69150,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c171d8f525af6029e27b4f097742a4573016670bf522f208c75d45ccf03ceb4b,PodSandboxId:05b21dab03397f23365b83428df6728dfdf4f3f6d2885a76045d9b67fdebff0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653414103380518,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57091fe8c0c738bd8713e9b00e9d6f28c32efebf7704702322ad06f77e17f32b,PodSandboxId:84e4b3e7daacf7879bdecebb481070297b782baa94efc4fac759568a69bd114f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653414060527569,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bfa0d921a0ce0a55af27a1696709e36,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f08497dae4ccb599defc3157bd84e508d31375ba472a2bb28419103113ff131,PodSandboxId:65431f984a19ec13097627219fb9430474eb25887961ce21ab24c124cefd3a7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653413999842100,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b99a5d59e524a05b9e2d8501bf6d11,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5378fe314f7a6ccad157d8c7e69480e9a035d341e61e570c71a186f8d71d64,PodSandboxId:3860b04bee19bf7b767c5e11a57b09a688be56266c03bdd875b4842531155254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653127564331448,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09935b18-4576-432d-8b3d-80670c2ef2a4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6d7f4c6d93e53       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   dec01f5a6cb5f       coredns-6f6b679f8f-ffnb7
	15274b39b451a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2d4fdae562320       storage-provisioner
	2f41a4d40a24e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   a49d2a2d2ae22       coredns-6f6b679f8f-lwxzl
	6c5b197d3d526       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   9ac14735d0ad7       kube-proxy-48s2x
	badd0b7d7706b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   913147acc93ff       etcd-no-preload-504385
	c171d8f525af6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   05b21dab03397       kube-apiserver-no-preload-504385
	57091fe8c0c73       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   84e4b3e7daacf       kube-controller-manager-no-preload-504385
	3f08497dae4cc       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   65431f984a19e       kube-scheduler-no-preload-504385
	6c5378fe314f7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   3860b04bee19b       kube-apiserver-no-preload-504385
	
	
	==> coredns [2f41a4d40a24ea78c12c4a62e7ea6f4e09d4ff71e4513cd7d0a8d8dd66996ce8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [6d7f4c6d93e53cbbc0c351c91fb7c74a7de9dd10899a589c1bee05f99af6db6a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-504385
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-504385
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=no-preload-504385
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T20_10_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 20:10:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-504385
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 20:19:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 20:15:36 +0000   Fri, 06 Sep 2024 20:10:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 20:15:36 +0000   Fri, 06 Sep 2024 20:10:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 20:15:36 +0000   Fri, 06 Sep 2024 20:10:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 20:15:36 +0000   Fri, 06 Sep 2024 20:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.184
	  Hostname:    no-preload-504385
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9a3178dcb7145c797377936fb22661e
	  System UUID:                e9a3178d-cb71-45c7-9737-7936fb22661e
	  Boot ID:                    28b88cc4-d161-40d9-993e-423f4a032f1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-ffnb7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-6f6b679f8f-lwxzl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-no-preload-504385                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-no-preload-504385             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-no-preload-504385    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-48s2x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-no-preload-504385             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-56mkl              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node no-preload-504385 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node no-preload-504385 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node no-preload-504385 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node no-preload-504385 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node no-preload-504385 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node no-preload-504385 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s                  node-controller  Node no-preload-504385 event: Registered Node no-preload-504385 in Controller
	
	
	==> dmesg <==
	[  +0.050223] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.230934] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.642640] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep 6 20:05] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.508433] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.060079] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075204] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.193381] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.120241] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.279555] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[ +15.938140] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.063317] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.152837] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +3.933336] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.202050] kauditd_printk_skb: 57 callbacks suppressed
	[  +8.114658] kauditd_printk_skb: 26 callbacks suppressed
	[Sep 6 20:10] systemd-fstab-generator[3065]: Ignoring "noauto" option for root device
	[  +0.067690] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.499660] systemd-fstab-generator[3386]: Ignoring "noauto" option for root device
	[  +0.087356] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.333236] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	[  +0.123882] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.111801] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [badd0b7d7706b2371c40a23bd57b90529c240ed5444d7dc95b4af308c113465f] <==
	{"level":"info","ts":"2024-09-06T20:10:14.529053Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ac98865638e77ade","initial-advertise-peer-urls":["https://192.168.61.184:2380"],"listen-peer-urls":["https://192.168.61.184:2380"],"advertise-client-urls":["https://192.168.61.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T20:10:14.529143Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T20:10:14.525222Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.184:2380"}
	{"level":"info","ts":"2024-09-06T20:10:14.529276Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.184:2380"}
	{"level":"info","ts":"2024-09-06T20:10:14.530373Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fd8e42ce2aaa20da","local-member-id":"ac98865638e77ade","added-peer-id":"ac98865638e77ade","added-peer-peer-urls":["https://192.168.61.184:2380"]}
	{"level":"info","ts":"2024-09-06T20:10:15.128729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-06T20:10:15.128872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-06T20:10:15.128982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade received MsgPreVoteResp from ac98865638e77ade at term 1"}
	{"level":"info","ts":"2024-09-06T20:10:15.129051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade became candidate at term 2"}
	{"level":"info","ts":"2024-09-06T20:10:15.129096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade received MsgVoteResp from ac98865638e77ade at term 2"}
	{"level":"info","ts":"2024-09-06T20:10:15.129128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade became leader at term 2"}
	{"level":"info","ts":"2024-09-06T20:10:15.129207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ac98865638e77ade elected leader ac98865638e77ade at term 2"}
	{"level":"info","ts":"2024-09-06T20:10:15.133830Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ac98865638e77ade","local-member-attributes":"{Name:no-preload-504385 ClientURLs:[https://192.168.61.184:2379]}","request-path":"/0/members/ac98865638e77ade/attributes","cluster-id":"fd8e42ce2aaa20da","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T20:10:15.134232Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:10:15.134630Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:10:15.135099Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:10:15.135920Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:10:15.140930Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.184:2379"}
	{"level":"info","ts":"2024-09-06T20:10:15.141467Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:10:15.142248Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T20:10:15.145664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T20:10:15.145699Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T20:10:15.146126Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fd8e42ce2aaa20da","local-member-id":"ac98865638e77ade","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:10:15.146241Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:10:15.146288Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:19:37 up 14 min,  0 users,  load average: 0.02, 0.16, 0.15
	Linux no-preload-504385 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6c5378fe314f7a6ccad157d8c7e69480e9a035d341e61e570c71a186f8d71d64] <==
	W0906 20:10:07.680379       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.682977       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.687317       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.745308       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.761145       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.781840       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.812755       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.854001       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.858552       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.884550       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.906310       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.014751       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.086990       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.145024       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.151620       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.195820       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.293901       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.395224       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.529315       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.537016       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.552976       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.694069       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.755118       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.756473       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.901050       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c171d8f525af6029e27b4f097742a4573016670bf522f208c75d45ccf03ceb4b] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0906 20:15:17.835679       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:15:17.835705       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0906 20:15:17.836643       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:15:17.837838       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:16:17.836882       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:16:17.836984       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:16:17.838045       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:16:17.838060       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:16:17.838123       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0906 20:16:17.839260       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:18:17.839277       1 handler_proxy.go:99] no RequestInfo found in the context
	W0906 20:18:17.839692       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:18:17.839764       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0906 20:18:17.839776       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:18:17.841561       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:18:17.841641       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [57091fe8c0c738bd8713e9b00e9d6f28c32efebf7704702322ad06f77e17f32b] <==
	E0906 20:14:23.721517       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:14:24.245970       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:14:53.729699       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:14:54.256227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:15:23.736233       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:15:24.264971       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:15:36.861401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-504385"
	E0906 20:15:53.743161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:15:54.273336       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:16:10.689977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="406.57µs"
	I0906 20:16:21.688178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="75.593µs"
	E0906 20:16:23.750554       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:16:24.281731       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:16:53.757197       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:16:54.290108       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:17:23.764799       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:17:24.298401       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:17:53.770867       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:17:54.306662       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:18:23.779794       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:18:24.316835       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:18:53.786723       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:18:54.324486       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:19:23.793683       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:19:24.332870       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6c5b197d3d52688957a63cce3ecf015b9d869dd995d73cfe8686595ce6ef51df] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 20:10:25.275365       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 20:10:25.297382       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.184"]
	E0906 20:10:25.297471       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 20:10:25.397537       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 20:10:25.397632       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 20:10:25.397662       1 server_linux.go:169] "Using iptables Proxier"
	I0906 20:10:25.409357       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 20:10:25.409683       1 server.go:483] "Version info" version="v1.31.0"
	I0906 20:10:25.409711       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:10:25.416825       1 config.go:197] "Starting service config controller"
	I0906 20:10:25.416882       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 20:10:25.417347       1 config.go:104] "Starting endpoint slice config controller"
	I0906 20:10:25.417355       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 20:10:25.417935       1 config.go:326] "Starting node config controller"
	I0906 20:10:25.417943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 20:10:25.517171       1 shared_informer.go:320] Caches are synced for service config
	I0906 20:10:25.518345       1 shared_informer.go:320] Caches are synced for node config
	I0906 20:10:25.518373       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3f08497dae4ccb599defc3157bd84e508d31375ba472a2bb28419103113ff131] <==
	W0906 20:10:16.864847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 20:10:16.864952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:16.865141       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 20:10:16.865209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.759264       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 20:10:17.759327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.816305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 20:10:17.816446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.826760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 20:10:17.826930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.837516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 20:10:17.837641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.860367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 20:10:17.860417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.861355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 20:10:17.861400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.969765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 20:10:17.969818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:18.110784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 20:10:18.110840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:18.123112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 20:10:18.123166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:18.390987       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 20:10:18.391052       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0906 20:10:21.253224       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 20:18:25 no-preload-504385 kubelet[3393]: E0906 20:18:25.672167    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:18:29 no-preload-504385 kubelet[3393]: E0906 20:18:29.785776    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653909785116777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:29 no-preload-504385 kubelet[3393]: E0906 20:18:29.785913    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653909785116777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:38 no-preload-504385 kubelet[3393]: E0906 20:18:38.672345    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:18:39 no-preload-504385 kubelet[3393]: E0906 20:18:39.788006    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653919787422425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:39 no-preload-504385 kubelet[3393]: E0906 20:18:39.788049    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653919787422425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:49 no-preload-504385 kubelet[3393]: E0906 20:18:49.790556    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653929789990774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:49 no-preload-504385 kubelet[3393]: E0906 20:18:49.790644    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653929789990774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:52 no-preload-504385 kubelet[3393]: E0906 20:18:52.672438    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:18:59 no-preload-504385 kubelet[3393]: E0906 20:18:59.793338    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653939792933551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:18:59 no-preload-504385 kubelet[3393]: E0906 20:18:59.793650    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653939792933551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:19:06 no-preload-504385 kubelet[3393]: E0906 20:19:06.671915    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:19:09 no-preload-504385 kubelet[3393]: E0906 20:19:09.795772    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653949795247956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:19:09 no-preload-504385 kubelet[3393]: E0906 20:19:09.796123    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653949795247956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:19:19 no-preload-504385 kubelet[3393]: E0906 20:19:19.712832    3393 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 20:19:19 no-preload-504385 kubelet[3393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 20:19:19 no-preload-504385 kubelet[3393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 20:19:19 no-preload-504385 kubelet[3393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 20:19:19 no-preload-504385 kubelet[3393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 20:19:19 no-preload-504385 kubelet[3393]: E0906 20:19:19.798284    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653959797887259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:19:19 no-preload-504385 kubelet[3393]: E0906 20:19:19.798323    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653959797887259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:19:20 no-preload-504385 kubelet[3393]: E0906 20:19:20.672094    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:19:29 no-preload-504385 kubelet[3393]: E0906 20:19:29.800035    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653969799364475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:19:29 no-preload-504385 kubelet[3393]: E0906 20:19:29.800396    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725653969799364475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:19:32 no-preload-504385 kubelet[3393]: E0906 20:19:32.671387    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	
	
	==> storage-provisioner [15274b39b451abccf3886a126ddfe07922152942a03e462584d3eee39a2c0f3d] <==
	I0906 20:10:26.721647       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 20:10:26.766335       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 20:10:26.766412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 20:10:26.796012       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 20:10:26.797016       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-504385_23ec0dd0-de12-4a78-9abb-d40c60f17bb6!
	I0906 20:10:26.815342       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"689a60c8-594d-47dc-950c-39275506564f", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-504385_23ec0dd0-de12-4a78-9abb-d40c60f17bb6 became leader
	I0906 20:10:26.897747       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-504385_23ec0dd0-de12-4a78-9abb-d40c60f17bb6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-504385 -n no-preload-504385
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-504385 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-56mkl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-504385 describe pod metrics-server-6867b74b74-56mkl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-504385 describe pod metrics-server-6867b74b74-56mkl: exit status 1 (60.410032ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-56mkl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-504385 describe pod metrics-server-6867b74b74-56mkl: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:13:20.185535   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:13:34.031438   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:13:58.211542   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:14:16.442596   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:14:23.362194   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:14:49.184834   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:14:57.094251   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:14:58.425566   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:15:21.276996   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:15:46.425969   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:15:50.866429   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:16:21.490286   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:16:44.179081   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:16:57.123089   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:17:13.931087   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
[last message repeated 37 times]
E0906 20:17:52.260908   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:17:53.377847   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
[last message repeated 40 times]
E0906 20:18:34.031350   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
[last message repeated 23 times]
E0906 20:18:58.212148   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
[last message repeated 24 times]
E0906 20:19:23.362105   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
[last message repeated 25 times]
E0906 20:19:49.183989   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
[last message repeated 8 times]
E0906 20:19:58.425004   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:20:50.866948   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:21:44.178486   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:21:57.122406   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298 -n old-k8s-version-843298
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 2 (228.21722ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-843298" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
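The warnings above are produced by repeatedly listing pods matching the k8s-app=kubernetes-dashboard label selector against the profile's apiserver until the 9m0s deadline expires. Below is a minimal Go sketch of that kind of wait loop using client-go; the 2-second interval and the waitForPodsWithLabel name are illustrative assumptions, not the actual helpers_test.go implementation.

// Hedged sketch of a label-selector poll that would emit warnings like the
// "pod list for ... returned: ... connection refused" lines above and give up
// with "context deadline exceeded" after the timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsWithLabel is a hypothetical helper name; the interval and timeout
// are assumptions chosen to mirror the 9m0s deadline reported in the log.
func waitForPodsWithLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Warn and keep polling, matching the behaviour seen in the log.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsWithLabel(context.Background(), cs,
		"kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println("failed waiting for dashboard pod:", err)
	}
}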
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 2 (224.292973ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
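For context, the "(may be ok)" note reflects that the harness tolerates a non-zero exit from `minikube status` as long as stdout still carries a usable state string (here exit status 2 with Host "Running" and APIServer "Stopped"). A minimal sketch of that tolerance using os/exec is below; the binary path is taken from this run, and the runStatus helper name and the exit-code-2 convention are assumptions drawn only from the log above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runStatus runs the status command with a Go template and returns its output,
// treating exit status 2 as non-fatal, as the post-mortem helper appears to do.
func runStatus(profile, format string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
		// Exit status 2 still produced usable status text in this log,
		// so report the output without failing outright.
		return string(out), nil
	}
	return string(out), err
}

func main() {
	host, err := runStatus("old-k8s-version-843298", "{{.Host}}")
	if err != nil {
		fmt.Println("status error:", err)
	}
	fmt.Print(host)
}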
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-843298 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-843298 logs -n 25: (1.564885109s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-603826 sudo cat                              | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo find                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo crio                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-603826                                       | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-859361 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | disable-driver-mounts-859361                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:57 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-504385             | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-458066            | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653828  | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC | 06 Sep 24 19:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC |                     |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-504385                  | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-458066                 | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-843298        | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653828       | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-843298             | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 20:00:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
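Every entry below follows the klog header convention stated on the previous line. As a reading aid, here is a minimal Go sketch that splits such a line into its fields; the regular expression and field names are my own illustration, not part of minikube:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" header
    // described in the log preamble above.
    var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

    func main() {
        line := "I0906 20:00:55.455816   73230 out.go:345] Setting OutFile to fd 1 ..."
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog-style line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }

Run against the first entry below, this prints severity I, date 0906, pid 73230 and source out.go:345.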
	I0906 20:00:55.455816   73230 out.go:345] Setting OutFile to fd 1 ...
	I0906 20:00:55.455933   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.455943   73230 out.go:358] Setting ErrFile to fd 2...
	I0906 20:00:55.455951   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.456141   73230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 20:00:55.456685   73230 out.go:352] Setting JSON to false
	I0906 20:00:55.457698   73230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6204,"bootTime":1725646651,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 20:00:55.457762   73230 start.go:139] virtualization: kvm guest
	I0906 20:00:55.459863   73230 out.go:177] * [old-k8s-version-843298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 20:00:55.461119   73230 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 20:00:55.461167   73230 notify.go:220] Checking for updates...
	I0906 20:00:55.463398   73230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:00:55.464573   73230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:00:55.465566   73230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:00:55.466605   73230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 20:00:55.467834   73230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:00:55.469512   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:00:55.470129   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.470183   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.484881   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I0906 20:00:55.485238   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.485752   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.485776   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.486108   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.486296   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.488175   73230 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 20:00:55.489359   73230 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 20:00:55.489671   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.489705   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.504589   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0906 20:00:55.505047   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.505557   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.505581   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.505867   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.506018   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.541116   73230 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 20:00:55.542402   73230 start.go:297] selected driver: kvm2
	I0906 20:00:55.542423   73230 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-8
43298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.542548   73230 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:00:55.543192   73230 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.543257   73230 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 20:00:55.558465   73230 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 20:00:55.558833   73230 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:00:55.558865   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:00:55.558875   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:00:55.558908   73230 start.go:340] cluster config:
	{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.559011   73230 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.561521   73230 out.go:177] * Starting "old-k8s-version-843298" primary control-plane node in "old-k8s-version-843298" cluster
	I0906 20:00:55.309027   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:58.377096   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:55.562714   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:00:55.562760   73230 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 20:00:55.562773   73230 cache.go:56] Caching tarball of preloaded images
	I0906 20:00:55.562856   73230 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 20:00:55.562868   73230 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0906 20:00:55.562977   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:00:55.563173   73230 start.go:360] acquireMachinesLock for old-k8s-version-843298: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
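A few lines up (pid 73230), the start path checks whether a preload tarball for v1.20.0 on crio already sits in the local cache and skips the download when it does. A small Go sketch of that existence check, assuming the cache layout shown in the log; the helper itself is hypothetical, not minikube's implementation:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath rebuilds the cache path seen in the log above for a given
    // Kubernetes version (cri-o, amd64 hard-coded for brevity).
    func preloadPath(minikubeHome, k8sVersion string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        home := os.Getenv("MINIKUBE_HOME")
        if home == "" {
            home = filepath.Join(os.Getenv("HOME"), ".minikube")
        }
        p := preloadPath(home, "v1.20.0")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("found local preload, skipping download:", p)
        } else {
            fmt.Println("no local preload, would download:", p)
        }
    }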
	I0906 20:01:04.457122   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:07.529093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:13.609120   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:16.681107   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:22.761164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:25.833123   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:31.913167   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:34.985108   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:41.065140   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:44.137176   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:50.217162   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:53.289137   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:59.369093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:02.441171   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:08.521164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:11.593164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:17.673124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:20.745159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:26.825154   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:29.897211   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:35.977181   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:39.049161   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:45.129172   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:48.201208   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:54.281103   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:57.353175   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:03.433105   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:06.505124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:12.585121   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:15.657169   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:21.737151   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:24.809135   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:30.889180   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:33.961145   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:40.041159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:43.113084   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
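The long run of identical "Error dialing TCP ... no route to host" entries above (pid 72322, the no-preload-504385 start) appears to be libmachine repeatedly probing the guest's SSH port (192.168.61.184:22) while the VM is unreachable. A rough Go sketch of an equivalent probe loop, with a simple fixed backoff; the helper name and timings are illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH keeps dialing addr until it answers or the deadline passes.
    // Each failed dial is printed, mirroring the repeated entries in the log above.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            fmt.Printf("Error dialing TCP: %v\n", err)
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForSSH("192.168.61.184:22", time.Minute); err != nil {
            fmt.Println(err)
        }
    }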
	I0906 20:03:46.117237   72441 start.go:364] duration metric: took 4m28.485189545s to acquireMachinesLock for "embed-certs-458066"
	I0906 20:03:46.117298   72441 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:03:46.117309   72441 fix.go:54] fixHost starting: 
	I0906 20:03:46.117737   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:03:46.117773   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:03:46.132573   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0906 20:03:46.133029   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:03:46.133712   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:03:46.133743   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:03:46.134097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:03:46.134322   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:03:46.134505   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:03:46.136291   72441 fix.go:112] recreateIfNeeded on embed-certs-458066: state=Stopped err=<nil>
	I0906 20:03:46.136313   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	W0906 20:03:46.136466   72441 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:03:46.138544   72441 out.go:177] * Restarting existing kvm2 VM for "embed-certs-458066" ...
	I0906 20:03:46.139833   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Start
	I0906 20:03:46.140001   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring networks are active...
	I0906 20:03:46.140754   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network default is active
	I0906 20:03:46.141087   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network mk-embed-certs-458066 is active
	I0906 20:03:46.141402   72441 main.go:141] libmachine: (embed-certs-458066) Getting domain xml...
	I0906 20:03:46.142202   72441 main.go:141] libmachine: (embed-certs-458066) Creating domain...
	I0906 20:03:47.351460   72441 main.go:141] libmachine: (embed-certs-458066) Waiting to get IP...
	I0906 20:03:47.352248   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.352628   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.352699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.352597   73827 retry.go:31] will retry after 202.870091ms: waiting for machine to come up
	I0906 20:03:46.114675   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:03:46.114711   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115092   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:03:46.115118   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115306   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:03:46.117092   72322 machine.go:96] duration metric: took 4m37.429712277s to provisionDockerMachine
	I0906 20:03:46.117135   72322 fix.go:56] duration metric: took 4m37.451419912s for fixHost
	I0906 20:03:46.117144   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 4m37.45145595s
	W0906 20:03:46.117167   72322 start.go:714] error starting host: provision: host is not running
	W0906 20:03:46.117242   72322 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0906 20:03:46.117252   72322 start.go:729] Will try again in 5 seconds ...
	I0906 20:03:47.557228   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.557656   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.557682   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.557606   73827 retry.go:31] will retry after 357.664781ms: waiting for machine to come up
	I0906 20:03:47.917575   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.918041   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.918068   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.918005   73827 retry.go:31] will retry after 338.480268ms: waiting for machine to come up
	I0906 20:03:48.258631   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.259269   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.259305   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.259229   73827 retry.go:31] will retry after 554.173344ms: waiting for machine to come up
	I0906 20:03:48.814947   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.815491   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.815523   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.815449   73827 retry.go:31] will retry after 601.029419ms: waiting for machine to come up
	I0906 20:03:49.418253   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:49.418596   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:49.418623   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:49.418548   73827 retry.go:31] will retry after 656.451458ms: waiting for machine to come up
	I0906 20:03:50.076488   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:50.076908   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:50.076928   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:50.076875   73827 retry.go:31] will retry after 1.13800205s: waiting for machine to come up
	I0906 20:03:51.216380   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:51.216801   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:51.216831   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:51.216758   73827 retry.go:31] will retry after 1.071685673s: waiting for machine to come up
	I0906 20:03:52.289760   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:52.290174   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:52.290202   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:52.290125   73827 retry.go:31] will retry after 1.581761127s: waiting for machine to come up
	I0906 20:03:51.119269   72322 start.go:360] acquireMachinesLock for no-preload-504385: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:03:53.873755   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:53.874150   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:53.874184   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:53.874120   73827 retry.go:31] will retry after 1.99280278s: waiting for machine to come up
	I0906 20:03:55.869267   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:55.869747   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:55.869776   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:55.869685   73827 retry.go:31] will retry after 2.721589526s: waiting for machine to come up
	I0906 20:03:58.594012   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:58.594402   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:58.594428   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:58.594354   73827 retry.go:31] will retry after 2.763858077s: waiting for machine to come up
	I0906 20:04:01.359424   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:01.359775   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:04:01.359809   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:04:01.359736   73827 retry.go:31] will retry after 3.822567166s: waiting for machine to come up
	I0906 20:04:06.669858   72867 start.go:364] duration metric: took 4m9.363403512s to acquireMachinesLock for "default-k8s-diff-port-653828"
	I0906 20:04:06.669929   72867 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:06.669938   72867 fix.go:54] fixHost starting: 
	I0906 20:04:06.670353   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:06.670393   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:06.688290   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44215
	I0906 20:04:06.688752   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:06.689291   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:04:06.689314   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:06.689692   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:06.689886   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:06.690048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:04:06.691557   72867 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653828: state=Stopped err=<nil>
	I0906 20:04:06.691592   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	W0906 20:04:06.691742   72867 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:06.693924   72867 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653828" ...
	I0906 20:04:06.694965   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Start
	I0906 20:04:06.695148   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring networks are active...
	I0906 20:04:06.695900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network default is active
	I0906 20:04:06.696316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network mk-default-k8s-diff-port-653828 is active
	I0906 20:04:06.696698   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Getting domain xml...
	I0906 20:04:06.697469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Creating domain...
	I0906 20:04:05.186782   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187288   72441 main.go:141] libmachine: (embed-certs-458066) Found IP for machine: 192.168.39.118
	I0906 20:04:05.187301   72441 main.go:141] libmachine: (embed-certs-458066) Reserving static IP address...
	I0906 20:04:05.187340   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has current primary IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187764   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.187784   72441 main.go:141] libmachine: (embed-certs-458066) Reserved static IP address: 192.168.39.118
	I0906 20:04:05.187797   72441 main.go:141] libmachine: (embed-certs-458066) DBG | skip adding static IP to network mk-embed-certs-458066 - found existing host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"}
	I0906 20:04:05.187805   72441 main.go:141] libmachine: (embed-certs-458066) Waiting for SSH to be available...
	I0906 20:04:05.187848   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Getting to WaitForSSH function...
	I0906 20:04:05.190229   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190546   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.190576   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190643   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH client type: external
	I0906 20:04:05.190679   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa (-rw-------)
	I0906 20:04:05.190714   72441 main.go:141] libmachine: (embed-certs-458066) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:05.190727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | About to run SSH command:
	I0906 20:04:05.190761   72441 main.go:141] libmachine: (embed-certs-458066) DBG | exit 0
	I0906 20:04:05.317160   72441 main.go:141] libmachine: (embed-certs-458066) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:05.317483   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetConfigRaw
	I0906 20:04:05.318089   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.320559   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.320944   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.320971   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.321225   72441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/config.json ...
	I0906 20:04:05.321445   72441 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:05.321465   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:05.321720   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.323699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.323972   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.324009   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.324126   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.324303   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324444   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324561   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.324706   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.324940   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.324953   72441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:05.437192   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:05.437217   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437479   72441 buildroot.go:166] provisioning hostname "embed-certs-458066"
	I0906 20:04:05.437495   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437665   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.440334   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440705   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.440733   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440925   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.441100   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441260   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441405   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.441573   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.441733   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.441753   72441 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-458066 && echo "embed-certs-458066" | sudo tee /etc/hostname
	I0906 20:04:05.566958   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-458066
	
	I0906 20:04:05.566986   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.569652   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.569984   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.570014   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.570158   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.570342   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570504   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570648   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.570838   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.571042   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.571060   72441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-458066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-458066/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-458066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:05.689822   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:05.689855   72441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:05.689882   72441 buildroot.go:174] setting up certificates
	I0906 20:04:05.689891   72441 provision.go:84] configureAuth start
	I0906 20:04:05.689899   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.690182   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.692758   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693151   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.693172   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693308   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.695364   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.695754   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695909   72441 provision.go:143] copyHostCerts
	I0906 20:04:05.695957   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:05.695975   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:05.696042   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:05.696123   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:05.696130   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:05.696153   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:05.696248   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:05.696257   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:05.696280   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:05.696329   72441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-458066 san=[127.0.0.1 192.168.39.118 embed-certs-458066 localhost minikube]
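The provisioning step above generates a server certificate whose SAN list comes straight from that log line (127.0.0.1, 192.168.39.118, embed-certs-458066, localhost, minikube). For orientation, a self-contained Go sketch that produces a certificate with the same SANs; note it self-signs for brevity, whereas minikube signs the server cert with its CA key (ca-key.pem), and the other parameters here are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // SANs copied from the provisioning log above; validity mirrors the
        // 26280h CertExpiration value in the cluster config.
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-458066"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"embed-certs-458066", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.118")},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }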
	I0906 20:04:06.015593   72441 provision.go:177] copyRemoteCerts
	I0906 20:04:06.015656   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:06.015683   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.018244   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018598   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.018630   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018784   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.018990   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.019169   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.019278   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.110170   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 20:04:06.136341   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:06.161181   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:06.184758   72441 provision.go:87] duration metric: took 494.857261ms to configureAuth
	I0906 20:04:06.184786   72441 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:06.184986   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:06.185049   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.187564   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.187955   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.187978   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.188153   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.188399   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188571   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.188920   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.189070   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.189084   72441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:06.425480   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:06.425518   72441 machine.go:96] duration metric: took 1.104058415s to provisionDockerMachine
	I0906 20:04:06.425535   72441 start.go:293] postStartSetup for "embed-certs-458066" (driver="kvm2")
	I0906 20:04:06.425548   72441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:06.425572   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.425893   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:06.425919   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.428471   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428768   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.428794   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428928   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.429109   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.429283   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.429419   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.515180   72441 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:06.519357   72441 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:06.519390   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:06.519464   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:06.519540   72441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:06.519625   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:06.528542   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:06.552463   72441 start.go:296] duration metric: took 126.912829ms for postStartSetup
	I0906 20:04:06.552514   72441 fix.go:56] duration metric: took 20.435203853s for fixHost
	I0906 20:04:06.552540   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.554994   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555521   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.555556   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555739   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.555937   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556095   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556253   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.556409   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.556600   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.556613   72441 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:06.669696   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653046.632932221
	
	I0906 20:04:06.669720   72441 fix.go:216] guest clock: 1725653046.632932221
	I0906 20:04:06.669730   72441 fix.go:229] Guest: 2024-09-06 20:04:06.632932221 +0000 UTC Remote: 2024-09-06 20:04:06.552518521 +0000 UTC m=+289.061134864 (delta=80.4137ms)
	I0906 20:04:06.669761   72441 fix.go:200] guest clock delta is within tolerance: 80.4137ms
	I0906 20:04:06.669769   72441 start.go:83] releasing machines lock for "embed-certs-458066", held for 20.552490687s
	I0906 20:04:06.669801   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.670060   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:06.673015   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673405   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.673433   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673599   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674041   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674210   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674304   72441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:06.674351   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.674414   72441 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:06.674437   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.676916   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677063   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677314   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677341   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677481   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677513   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677686   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677691   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677864   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677878   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678013   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678025   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.678191   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.758176   72441 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:06.782266   72441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:06.935469   72441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:06.941620   72441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:06.941680   72441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:06.957898   72441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:06.957927   72441 start.go:495] detecting cgroup driver to use...
	I0906 20:04:06.957995   72441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:06.978574   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:06.993967   72441 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:06.994035   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:07.008012   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:07.022073   72441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:07.133622   72441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:07.291402   72441 docker.go:233] disabling docker service ...
	I0906 20:04:07.291478   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:07.306422   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:07.321408   72441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:07.442256   72441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:07.564181   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:07.579777   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:07.599294   72441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:07.599361   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.610457   72441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:07.610555   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.621968   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.633527   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.645048   72441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:07.659044   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.670526   72441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.689465   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.701603   72441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:07.712085   72441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:07.712144   72441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:07.728406   72441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:07.739888   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:07.862385   72441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:07.954721   72441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:07.954792   72441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
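
	The "Will wait 60s for socket path" step above is a simple poll-until-exists loop against /var/run/crio/crio.sock. A generic Go sketch of that pattern (an illustration, not the code minikube itself uses) is:

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    // waitForSocket polls until path exists or the timeout elapses.
	    func waitForSocket(path string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if _, err := os.Stat(path); err == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	    }

	    func main() {
	        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        fmt.Println("crio socket is ready")
	    }
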
	I0906 20:04:07.959478   72441 start.go:563] Will wait 60s for crictl version
	I0906 20:04:07.959545   72441 ssh_runner.go:195] Run: which crictl
	I0906 20:04:07.963893   72441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:08.003841   72441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:08.003917   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.032191   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.063563   72441 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:04:07.961590   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting to get IP...
	I0906 20:04:07.962441   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962859   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962923   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:07.962841   73982 retry.go:31] will retry after 292.508672ms: waiting for machine to come up
	I0906 20:04:08.257346   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257845   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257867   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.257815   73982 retry.go:31] will retry after 265.967606ms: waiting for machine to come up
	I0906 20:04:08.525352   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525907   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.525834   73982 retry.go:31] will retry after 308.991542ms: waiting for machine to come up
	I0906 20:04:08.836444   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837021   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837053   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.836973   73982 retry.go:31] will retry after 483.982276ms: waiting for machine to come up
	I0906 20:04:09.322661   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323161   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323184   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.323125   73982 retry.go:31] will retry after 574.860867ms: waiting for machine to come up
	I0906 20:04:09.899849   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900228   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.900187   73982 retry.go:31] will retry after 769.142372ms: waiting for machine to come up
	I0906 20:04:10.671316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671796   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671853   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:10.671771   73982 retry.go:31] will retry after 720.232224ms: waiting for machine to come up
	I0906 20:04:11.393120   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393502   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393534   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:11.393447   73982 retry.go:31] will retry after 975.812471ms: waiting for machine to come up
	I0906 20:04:08.064907   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:08.067962   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068410   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:08.068442   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068626   72441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:08.072891   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:08.086275   72441 kubeadm.go:883] updating cluster {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:08.086383   72441 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:08.086423   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:08.123100   72441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:08.123158   72441 ssh_runner.go:195] Run: which lz4
	I0906 20:04:08.127330   72441 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:08.131431   72441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:08.131466   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:09.584066   72441 crio.go:462] duration metric: took 1.456765631s to copy over tarball
	I0906 20:04:09.584131   72441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:11.751911   72441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.167751997s)
	I0906 20:04:11.751949   72441 crio.go:469] duration metric: took 2.167848466s to extract the tarball
	I0906 20:04:11.751959   72441 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:11.790385   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:11.831973   72441 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:11.831995   72441 cache_images.go:84] Images are preloaded, skipping loading
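
	The preload check above keys on whether registry.k8s.io/kube-apiserver:v1.31.0 appears in the output of `sudo crictl images --output json`; if it does not, the preloaded-images tarball is copied over and extracted, as the log shows. A small Go sketch of that presence check follows. It assumes crictl's JSON carries an `images` list with `repoTags` fields (which matches current crictl output) and is not taken from minikube's source:

	    package main

	    import (
	        "encoding/json"
	        "fmt"
	        "log"
	        "os/exec"
	    )

	    // Only the fields this check needs; real crictl output carries more.
	    type imageList struct {
	        Images []struct {
	            RepoTags []string `json:"repoTags"`
	        } `json:"images"`
	    }

	    func main() {
	        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	        if err != nil {
	            log.Fatal(err)
	        }
	        var list imageList
	        if err := json.Unmarshal(out, &list); err != nil {
	            log.Fatal(err)
	        }
	        const want = "registry.k8s.io/kube-apiserver:v1.31.0" // image the preload check keys on
	        for _, img := range list.Images {
	            for _, tag := range img.RepoTags {
	                if tag == want {
	                    fmt.Println("preloaded images already present")
	                    return
	                }
	            }
	        }
	        fmt.Println("preload missing; the tarball would be copied and extracted")
	    }
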
	I0906 20:04:11.832003   72441 kubeadm.go:934] updating node { 192.168.39.118 8443 v1.31.0 crio true true} ...
	I0906 20:04:11.832107   72441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-458066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:11.832166   72441 ssh_runner.go:195] Run: crio config
	I0906 20:04:11.881946   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:11.881973   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:11.882000   72441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:11.882028   72441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-458066 NodeName:embed-certs-458066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:11.882186   72441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-458066"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
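
	The kubeadm/kubelet/kube-proxy configuration dumped above is rendered by minikube from Go templates filled with the node name, IP and Kubernetes version. A much-simplified sketch of that rendering, covering only the InitConfiguration fragment and using made-up parameter names, is:

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    // Illustrative parameters only; the real generator carries many more fields.
	    type nodeParams struct {
	        Name string
	        IP   string
	    }

	    const initCfg = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
	        "kind: InitConfiguration\n" +
	        "localAPIEndpoint:\n" +
	        "  advertiseAddress: {{.IP}}\n" +
	        "  bindPort: 8443\n" +
	        "nodeRegistration:\n" +
	        "  criSocket: unix:///var/run/crio/crio.sock\n" +
	        "  name: \"{{.Name}}\"\n" +
	        "  kubeletExtraArgs:\n" +
	        "    node-ip: {{.IP}}\n" +
	        "  taints: []\n"

	    func main() {
	        p := nodeParams{Name: "embed-certs-458066", IP: "192.168.39.118"}
	        tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	        if err := tmpl.Execute(os.Stdout, p); err != nil {
	            panic(err)
	        }
	    }
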
	I0906 20:04:11.882266   72441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:11.892537   72441 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:11.892617   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:11.902278   72441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0906 20:04:11.920451   72441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:11.938153   72441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0906 20:04:11.957510   72441 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:11.961364   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:11.973944   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:12.109677   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:12.126348   72441 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066 for IP: 192.168.39.118
	I0906 20:04:12.126378   72441 certs.go:194] generating shared ca certs ...
	I0906 20:04:12.126399   72441 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:12.126562   72441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:12.126628   72441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:12.126642   72441 certs.go:256] generating profile certs ...
	I0906 20:04:12.126751   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/client.key
	I0906 20:04:12.126843   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key.c10a03b1
	I0906 20:04:12.126904   72441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key
	I0906 20:04:12.127063   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:12.127111   72441 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:12.127123   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:12.127153   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:12.127189   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:12.127218   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:12.127268   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:12.128117   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:12.185978   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:12.218124   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:12.254546   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:12.290098   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0906 20:04:12.317923   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:12.341186   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:12.363961   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:04:12.388000   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:12.418618   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:12.442213   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:12.465894   72441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:12.482404   72441 ssh_runner.go:195] Run: openssl version
	I0906 20:04:12.488370   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:12.499952   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504565   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504619   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.510625   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:12.522202   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:12.370306   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370743   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370779   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:12.370688   73982 retry.go:31] will retry after 1.559820467s: waiting for machine to come up
	I0906 20:04:13.932455   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933042   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933072   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:13.932985   73982 retry.go:31] will retry after 1.968766852s: waiting for machine to come up
	I0906 20:04:15.903304   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903826   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903855   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:15.903775   73982 retry.go:31] will retry after 2.738478611s: waiting for machine to come up
	I0906 20:04:12.533501   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538229   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538284   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.544065   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:12.555220   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:12.566402   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571038   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571093   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.577057   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:12.588056   72441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:12.592538   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:12.598591   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:12.604398   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:12.610502   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:12.616513   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:12.622859   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
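
	Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether the given certificate remains valid for at least another 24 hours. The equivalent check in Go with crypto/x509 (a standalone sketch, not minikube's code) looks like:

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "log"
	        "os"
	        "time"
	    )

	    // checkend reports whether the certificate at path is still valid "window" from now,
	    // mirroring `openssl x509 -checkend <seconds>`.
	    func checkend(path string, window time.Duration) (bool, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return false, err
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            return false, fmt.Errorf("no PEM block in %s", path)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(window).Before(cert.NotAfter), nil
	    }

	    func main() {
	        ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Println("valid for at least another 24h:", ok)
	    }
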
	I0906 20:04:12.628975   72441 kubeadm.go:392] StartCluster: {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:12.629103   72441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:12.629154   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.667699   72441 cri.go:89] found id: ""
	I0906 20:04:12.667764   72441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:12.678070   72441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:12.678092   72441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:12.678148   72441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:12.687906   72441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:12.688889   72441 kubeconfig.go:125] found "embed-certs-458066" server: "https://192.168.39.118:8443"
	I0906 20:04:12.690658   72441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:12.700591   72441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.118
	I0906 20:04:12.700623   72441 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:12.700635   72441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:12.700675   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.741471   72441 cri.go:89] found id: ""
	I0906 20:04:12.741553   72441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:12.757877   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:12.767729   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:12.767748   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:12.767800   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:12.777094   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:12.777157   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:12.786356   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:12.795414   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:12.795470   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:12.804727   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.813481   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:12.813534   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.822844   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:12.831877   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:12.831930   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:12.841082   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:12.850560   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:12.975888   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:13.850754   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.064392   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.140680   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.239317   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:14.239411   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:14.740313   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.240388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.740388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.755429   72441 api_server.go:72] duration metric: took 1.516111342s to wait for apiserver process to appear ...
	I0906 20:04:15.755462   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:15.755483   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.544772   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.544807   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.544824   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.596487   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.596546   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.755752   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.761917   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:18.761946   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
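
	The 403 and 500 responses above are expected while the freshly restarted apiserver is still creating its RBAC bootstrap roles; the health check simply keeps polling /healthz until it returns 200. A generic Go sketch of such a poll (skipping TLS verification for the anonymous probe, which is an assumption of this sketch rather than minikube's exact client setup) is:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // Anonymous probe against the apiserver's self-signed serving cert;
	                // verification is skipped here purely for the sketch.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://192.168.39.118:8443/healthz")
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("apiserver healthy:", string(body))
	                    return
	                }
	                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for apiserver /healthz")
	    }
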
	I0906 20:04:19.256512   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.265937   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.265973   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.756568   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.763581   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.763606   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:20.256237   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:20.262036   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:04:20.268339   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:20.268364   72441 api_server.go:131] duration metric: took 4.512894792s to wait for apiserver health ...
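The [+]/[-] listing above is the kube-apiserver /healthz endpoint in verbose mode. Once the cluster answers, the same per-check breakdown can be pulled by hand; this is only a sketch, with the kubectl context name inferred from the node name in this log:

    kubectl --context embed-certs-458066 get --raw='/healthz?verbose'
    # each check prints [+] ok or [-] failed, matching the listing above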
	I0906 20:04:20.268372   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:20.268378   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:20.270262   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:18.644597   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645056   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645088   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:18.644992   73982 retry.go:31] will retry after 2.982517528s: waiting for machine to come up
	I0906 20:04:21.631028   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631392   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631414   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:21.631367   73982 retry.go:31] will retry after 3.639469531s: waiting for machine to come up
	I0906 20:04:20.271474   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:20.282996   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
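The 496-byte file copied here is minikube's bridge CNI config. The exact contents are not shown in this log; a bridge conflist of this kind generally has the shape sketched in the comments below (values illustrative only), with the pod subnet matching the 10.244.0.0/16 CIDR chosen later in this run:

    sudo cat /etc/cni/net.d/1-k8s.conflist
    # roughly:
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "bridge",
    #   "plugins": [
    #     { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
    #       "hairpinMode": true,
    #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #     { "type": "portmap", "capabilities": { "portMappings": true } }
    #   ]
    # }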
	I0906 20:04:20.303957   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:20.315560   72441 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:20.315602   72441 system_pods.go:61] "coredns-6f6b679f8f-v6z7z" [b2c18dba-1210-4e95-a705-95abceca92f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:20.315611   72441 system_pods.go:61] "etcd-embed-certs-458066" [cf60e7c7-1801-42c7-be25-85242c22a5d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:20.315619   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [48c684ec-f93f-49ec-868b-6e7bc20ad506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:20.315625   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [1d55b520-2d8f-4517-a491-8193eaff5d89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:20.315631   72441 system_pods.go:61] "kube-proxy-crvq7" [f0610684-81ee-426a-adc2-aea80faab822] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:20.315639   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [d8744325-58f2-43a8-9a93-516b5a6fb989] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:20.315644   72441 system_pods.go:61] "metrics-server-6867b74b74-gtg94" [600e9c90-20db-407e-b586-fae3809d87b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:20.315649   72441 system_pods.go:61] "storage-provisioner" [1efe7188-2d33-4a29-afbe-823adbef73b3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:20.315657   72441 system_pods.go:74] duration metric: took 11.674655ms to wait for pod list to return data ...
	I0906 20:04:20.315665   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:20.318987   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:20.319012   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:20.319023   72441 node_conditions.go:105] duration metric: took 3.354197ms to run NodePressure ...
	I0906 20:04:20.319038   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:20.600925   72441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607562   72441 kubeadm.go:739] kubelet initialised
	I0906 20:04:20.607590   72441 kubeadm.go:740] duration metric: took 6.637719ms waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607602   72441 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:20.611592   72441 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
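The wait that pod_ready.go starts here can also be reproduced by hand; a minimal equivalent, assuming the kubectl context matches the profile name seen in the node name:

    kubectl --context embed-certs-458066 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s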
	I0906 20:04:26.558023   73230 start.go:364] duration metric: took 3m30.994815351s to acquireMachinesLock for "old-k8s-version-843298"
	I0906 20:04:26.558087   73230 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:26.558096   73230 fix.go:54] fixHost starting: 
	I0906 20:04:26.558491   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:26.558542   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:26.576511   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0906 20:04:26.576933   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:26.577434   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:04:26.577460   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:26.577794   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:26.577968   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:26.578128   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetState
	I0906 20:04:26.579640   73230 fix.go:112] recreateIfNeeded on old-k8s-version-843298: state=Stopped err=<nil>
	I0906 20:04:26.579674   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	W0906 20:04:26.579829   73230 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:26.581843   73230 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-843298" ...
	I0906 20:04:25.275406   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275902   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Found IP for machine: 192.168.50.16
	I0906 20:04:25.275942   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has current primary IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275955   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserving static IP address...
	I0906 20:04:25.276431   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.276463   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserved static IP address: 192.168.50.16
	I0906 20:04:25.276482   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | skip adding static IP to network mk-default-k8s-diff-port-653828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"}
	I0906 20:04:25.276493   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for SSH to be available...
	I0906 20:04:25.276512   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Getting to WaitForSSH function...
	I0906 20:04:25.278727   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279006   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.279037   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279196   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH client type: external
	I0906 20:04:25.279234   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa (-rw-------)
	I0906 20:04:25.279289   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:25.279312   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | About to run SSH command:
	I0906 20:04:25.279330   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | exit 0
	I0906 20:04:25.405134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:25.405524   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetConfigRaw
	I0906 20:04:25.406134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.408667   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409044   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.409074   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409332   72867 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/config.json ...
	I0906 20:04:25.409513   72867 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:25.409530   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:25.409724   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.411737   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412027   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.412060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412171   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.412362   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412489   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412662   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.412802   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.413045   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.413059   72867 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:25.513313   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:25.513343   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513613   72867 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653828"
	I0906 20:04:25.513644   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513851   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.516515   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.516847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.516895   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.517116   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.517300   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517461   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517574   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.517712   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.517891   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.517905   72867 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653828 && echo "default-k8s-diff-port-653828" | sudo tee /etc/hostname
	I0906 20:04:25.637660   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653828
	
	I0906 20:04:25.637691   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.640258   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640600   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.640626   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640811   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.641001   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641177   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641333   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.641524   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.641732   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.641754   72867 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:25.749746   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:25.749773   72867 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:25.749795   72867 buildroot.go:174] setting up certificates
	I0906 20:04:25.749812   72867 provision.go:84] configureAuth start
	I0906 20:04:25.749828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.750111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.752528   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.752893   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.752920   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.753104   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.755350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755642   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.755666   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755808   72867 provision.go:143] copyHostCerts
	I0906 20:04:25.755858   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:25.755875   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:25.755930   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:25.756017   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:25.756024   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:25.756046   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:25.756129   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:25.756137   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:25.756155   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:25.756212   72867 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653828 san=[127.0.0.1 192.168.50.16 default-k8s-diff-port-653828 localhost minikube]
	I0906 20:04:25.934931   72867 provision.go:177] copyRemoteCerts
	I0906 20:04:25.935018   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:25.935060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.937539   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.937899   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.937925   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.938111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.938308   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.938469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.938644   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.019666   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:26.043989   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0906 20:04:26.066845   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 20:04:26.090526   72867 provision.go:87] duration metric: took 340.698646ms to configureAuth
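configureAuth regenerated the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.50.16, the hostname, localhost, minikube). The copied cert can be spot-checked on the guest, for example:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'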
	I0906 20:04:26.090561   72867 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:26.090786   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:26.090878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.093783   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094167   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.094201   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094503   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.094689   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094850   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094975   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.095130   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.095357   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.095389   72867 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:26.324270   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:26.324301   72867 machine.go:96] duration metric: took 914.775498ms to provisionDockerMachine
	I0906 20:04:26.324315   72867 start.go:293] postStartSetup for "default-k8s-diff-port-653828" (driver="kvm2")
	I0906 20:04:26.324328   72867 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:26.324350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.324726   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:26.324759   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.327339   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327718   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.327750   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.328147   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.328309   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.328449   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.408475   72867 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:26.413005   72867 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:26.413033   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:26.413107   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:26.413203   72867 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:26.413320   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:26.422811   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:26.449737   72867 start.go:296] duration metric: took 125.408167ms for postStartSetup
	I0906 20:04:26.449772   72867 fix.go:56] duration metric: took 19.779834553s for fixHost
	I0906 20:04:26.449792   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.452589   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.452990   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.453022   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.453323   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.453529   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453710   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.453966   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.454125   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.454136   72867 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:26.557844   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653066.531604649
	
	I0906 20:04:26.557875   72867 fix.go:216] guest clock: 1725653066.531604649
	I0906 20:04:26.557884   72867 fix.go:229] Guest: 2024-09-06 20:04:26.531604649 +0000 UTC Remote: 2024-09-06 20:04:26.449775454 +0000 UTC m=+269.281822801 (delta=81.829195ms)
	I0906 20:04:26.557904   72867 fix.go:200] guest clock delta is within tolerance: 81.829195ms
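For reference, the delta reported here is simply the guest timestamp minus the host timestamp: 1725653066.531604649 - 1725653066.449775454 = 0.081829195 s, i.e. the 81.829195ms shown, which is within the clock-skew tolerance.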
	I0906 20:04:26.557909   72867 start.go:83] releasing machines lock for "default-k8s-diff-port-653828", held for 19.888002519s
	I0906 20:04:26.557943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.558256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:26.561285   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561705   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.561732   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562425   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562628   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562732   72867 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:26.562782   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.562920   72867 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:26.562950   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.565587   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.565970   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566149   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566331   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.566542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.566605   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566633   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566744   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.566756   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566992   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.567145   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.567302   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.672529   72867 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:26.678762   72867 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:26.825625   72867 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:26.832290   72867 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:26.832363   72867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:26.848802   72867 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:26.848824   72867 start.go:495] detecting cgroup driver to use...
	I0906 20:04:26.848917   72867 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:26.864986   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:26.878760   72867 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:26.878813   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:26.893329   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:26.909090   72867 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:27.025534   72867 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:27.190190   72867 docker.go:233] disabling docker service ...
	I0906 20:04:27.190293   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:22.617468   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:24.618561   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.118448   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.204700   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:27.217880   72867 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:27.346599   72867 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:27.466601   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:27.480785   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:27.501461   72867 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:27.501523   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.511815   72867 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:27.511868   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.521806   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.532236   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.542227   72867 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:27.552389   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.563462   72867 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.583365   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.594465   72867 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:27.605074   72867 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:27.605140   72867 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:27.618702   72867 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:27.630566   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:27.748387   72867 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:27.841568   72867 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:27.841652   72867 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:27.846880   72867 start.go:563] Will wait 60s for crictl version
	I0906 20:04:27.846936   72867 ssh_runner.go:195] Run: which crictl
	I0906 20:04:27.851177   72867 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:27.895225   72867 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:27.895327   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.934388   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.966933   72867 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
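The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl set. A quick way to spot-check the result on the guest, with the expected values taken from the commands in this log:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",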
	I0906 20:04:26.583194   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .Start
	I0906 20:04:26.583341   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring networks are active...
	I0906 20:04:26.584046   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network default is active
	I0906 20:04:26.584420   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network mk-old-k8s-version-843298 is active
	I0906 20:04:26.584851   73230 main.go:141] libmachine: (old-k8s-version-843298) Getting domain xml...
	I0906 20:04:26.585528   73230 main.go:141] libmachine: (old-k8s-version-843298) Creating domain...
	I0906 20:04:27.874281   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting to get IP...
	I0906 20:04:27.875189   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:27.875762   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:27.875844   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:27.875754   74166 retry.go:31] will retry after 289.364241ms: waiting for machine to come up
	I0906 20:04:28.166932   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.167349   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.167375   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.167303   74166 retry.go:31] will retry after 317.106382ms: waiting for machine to come up
	I0906 20:04:28.485664   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.486147   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.486241   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.486199   74166 retry.go:31] will retry after 401.712201ms: waiting for machine to come up
	I0906 20:04:28.890039   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.890594   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.890621   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.890540   74166 retry.go:31] will retry after 570.418407ms: waiting for machine to come up
	I0906 20:04:29.462983   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:29.463463   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:29.463489   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:29.463428   74166 retry.go:31] will retry after 696.361729ms: waiting for machine to come up
	I0906 20:04:30.161305   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:30.161829   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:30.161876   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:30.161793   74166 retry.go:31] will retry after 896.800385ms: waiting for machine to come up
	I0906 20:04:27.968123   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:27.971448   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.971880   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:27.971904   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.972128   72867 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:27.981160   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:27.994443   72867 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653
828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:27.994575   72867 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:27.994635   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:28.043203   72867 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:28.043285   72867 ssh_runner.go:195] Run: which lz4
	I0906 20:04:28.048798   72867 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:28.053544   72867 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:28.053577   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:29.490070   72867 crio.go:462] duration metric: took 1.441303819s to copy over tarball
	I0906 20:04:29.490142   72867 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:31.649831   72867 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159650072s)
	I0906 20:04:31.649870   72867 crio.go:469] duration metric: took 2.159772826s to extract the tarball
	I0906 20:04:31.649880   72867 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:31.686875   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:31.729557   72867 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:31.729580   72867 cache_images.go:84] Images are preloaded, skipping loading
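	The preload decision above hinges on whether `sudo crictl images --output json` already lists the expected image tags; when it does not, the tarball is copied and extracted, and the check is repeated. A rough Go sketch of that presence check follows, assuming the usual crictl JSON shape with an `images` array carrying `repoTags`; the struct and function names are illustrative, not minikube's own types.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList matches the crictl JSON output closely enough for a presence
	// check; only the fields used here are declared.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage runs crictl over sudo, as in the log above, and reports whether
	// any image carries the wanted tag.
	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
		if err != nil {
			panic(err)
		}
		fmt.Println("preloaded:", ok)
	}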
	I0906 20:04:31.729587   72867 kubeadm.go:934] updating node { 192.168.50.16 8444 v1.31.0 crio true true} ...
	I0906 20:04:31.729698   72867 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:31.729799   72867 ssh_runner.go:195] Run: crio config
	I0906 20:04:31.777272   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:31.777299   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:31.777316   72867 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:31.777336   72867 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.16 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653828 NodeName:default-k8s-diff-port-653828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:31.777509   72867 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.16
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653828"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:31.777577   72867 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:31.788008   72867 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:31.788070   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:31.798261   72867 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0906 20:04:31.815589   72867 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:31.832546   72867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0906 20:04:31.849489   72867 ssh_runner.go:195] Run: grep 192.168.50.16	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:31.853452   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:31.866273   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:31.984175   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:32.001110   72867 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828 for IP: 192.168.50.16
	I0906 20:04:32.001139   72867 certs.go:194] generating shared ca certs ...
	I0906 20:04:32.001160   72867 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:32.001343   72867 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:32.001399   72867 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:32.001413   72867 certs.go:256] generating profile certs ...
	I0906 20:04:32.001509   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/client.key
	I0906 20:04:32.001613   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key.01951d83
	I0906 20:04:32.001665   72867 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key
	I0906 20:04:32.001815   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:32.001866   72867 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:32.001880   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:32.001913   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:32.001933   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:32.001962   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:32.002001   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:32.002812   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:32.037177   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:32.078228   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:32.117445   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:32.153039   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 20:04:32.186458   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:28.120786   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:28.120826   72441 pod_ready.go:82] duration metric: took 7.509209061s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:28.120842   72441 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:30.129518   72441 pod_ready.go:103] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:31.059799   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.060272   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.060294   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.060226   74166 retry.go:31] will retry after 841.627974ms: waiting for machine to come up
	I0906 20:04:31.903823   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.904258   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.904280   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.904238   74166 retry.go:31] will retry after 1.274018797s: waiting for machine to come up
	I0906 20:04:33.179723   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:33.180090   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:33.180133   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:33.180059   74166 retry.go:31] will retry after 1.496142841s: waiting for machine to come up
	I0906 20:04:34.678209   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:34.678697   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:34.678726   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:34.678652   74166 retry.go:31] will retry after 1.795101089s: waiting for machine to come up
	I0906 20:04:32.216815   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:32.245378   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:32.272163   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:32.297017   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:32.321514   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:32.345724   72867 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:32.362488   72867 ssh_runner.go:195] Run: openssl version
	I0906 20:04:32.368722   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:32.380099   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384777   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384834   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.392843   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:32.405716   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:32.417043   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422074   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422143   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.427946   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:32.439430   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:32.450466   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455056   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455114   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.460970   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:32.471978   72867 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:32.476838   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:32.483008   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:32.489685   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:32.496446   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:32.502841   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:32.509269   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
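	The six openssl runs above each ask whether a certificate expires within 86400 seconds (24 hours) via `-checkend 86400`. The same check expressed natively in Go, for illustration only; the path in main and the helper name are assumptions, not minikube code.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// which is the question `openssl x509 -noout -checkend 86400` answers above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}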
	I0906 20:04:32.515687   72867 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:32.515791   72867 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:32.515853   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.567687   72867 cri.go:89] found id: ""
	I0906 20:04:32.567763   72867 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:32.578534   72867 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:32.578552   72867 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:32.578598   72867 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:32.588700   72867 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:32.589697   72867 kubeconfig.go:125] found "default-k8s-diff-port-653828" server: "https://192.168.50.16:8444"
	I0906 20:04:32.591739   72867 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:32.601619   72867 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.16
	I0906 20:04:32.601649   72867 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:32.601659   72867 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:32.601724   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.640989   72867 cri.go:89] found id: ""
	I0906 20:04:32.641056   72867 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:32.659816   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:32.670238   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:32.670274   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:32.670327   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:04:32.679687   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:32.679778   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:32.689024   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:04:32.698403   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:32.698465   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:32.707806   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.717015   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:32.717105   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.726408   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:04:32.735461   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:32.735538   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:32.744701   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:32.754202   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:32.874616   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.759668   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.984693   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.051998   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.155274   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:34.155384   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:34.655749   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.156069   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.656120   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.672043   72867 api_server.go:72] duration metric: took 1.516769391s to wait for apiserver process to appear ...
	I0906 20:04:35.672076   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:35.672099   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:32.628208   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.628235   72441 pod_ready.go:82] duration metric: took 4.507383414s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.628248   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633941   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.633965   72441 pod_ready.go:82] duration metric: took 5.709738ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633975   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639227   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.639249   72441 pod_ready.go:82] duration metric: took 5.26842ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639259   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644664   72441 pod_ready.go:93] pod "kube-proxy-crvq7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.644690   72441 pod_ready.go:82] duration metric: took 5.423551ms for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644701   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650000   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.650022   72441 pod_ready.go:82] duration metric: took 5.312224ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650034   72441 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:34.657709   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:37.157744   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:38.092386   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.092429   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.092448   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.129071   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.129110   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.172277   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.213527   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.213573   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:38.673103   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.677672   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.677704   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.172237   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.179638   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:39.179670   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.672801   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.678523   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:04:39.688760   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:39.688793   72867 api_server.go:131] duration metric: took 4.016709147s to wait for apiserver health ...
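	The healthz wait above issues anonymous HTTPS GETs against port 8444, tolerating 403 and 500 responses until the endpoint returns 200 with body "ok". A minimal polling sketch under those assumptions follows; it skips TLS verification because the probe is unauthenticated against the apiserver's self-signed serving certificate, and it mirrors the pattern visible in the log rather than minikube's exact code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
	// or the deadline passes, printing intermediate failures as the log does.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.16:8444/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
	}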
	I0906 20:04:39.688804   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:39.688812   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:39.690721   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:36.474937   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:36.475399   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:36.475497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:36.475351   74166 retry.go:31] will retry after 1.918728827s: waiting for machine to come up
	I0906 20:04:38.397024   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:38.397588   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:38.397617   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:38.397534   74166 retry.go:31] will retry after 3.460427722s: waiting for machine to come up
	I0906 20:04:39.692055   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:39.707875   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:04:39.728797   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:39.740514   72867 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:39.740553   72867 system_pods.go:61] "coredns-6f6b679f8f-mvwth" [53675f76-d849-471c-9cd1-561e2f8e6499] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:39.740562   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [f69c9488-87d4-487e-902b-588182c2e2e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:39.740567   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [d641f983-776e-4102-81a3-ba3cf49911a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:39.740579   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [1b09e88d-b038-42d3-9c36-4eee1eff1c4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:39.740585   72867 system_pods.go:61] "kube-proxy-9wlq4" [5254a977-ded3-439d-8db0-cd54ccd96940] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:39.740590   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [f8c16cf5-2c76-428f-83de-e79c49566683] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:39.740594   72867 system_pods.go:61] "metrics-server-6867b74b74-dds56" [6219eb1e-2904-487c-b4ed-d786a0627281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:39.740598   72867 system_pods.go:61] "storage-provisioner" [58dd82cd-e250-4f57-97ad-55408f001cc3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:39.740605   72867 system_pods.go:74] duration metric: took 11.784722ms to wait for pod list to return data ...
	I0906 20:04:39.740614   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:39.745883   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:39.745913   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:39.745923   72867 node_conditions.go:105] duration metric: took 5.304169ms to run NodePressure ...
	I0906 20:04:39.745945   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:40.031444   72867 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036537   72867 kubeadm.go:739] kubelet initialised
	I0906 20:04:40.036556   72867 kubeadm.go:740] duration metric: took 5.087185ms waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036563   72867 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:40.044926   72867 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:42.050947   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:39.657641   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:42.156327   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:41.860109   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:41.860612   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:41.860640   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:41.860560   74166 retry.go:31] will retry after 4.509018672s: waiting for machine to come up
	I0906 20:04:44.051148   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.554068   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:44.157427   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.656559   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
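	The pod_ready lines above poll each pod's Ready condition until it reports True or the 4m0s budget runs out. A small client-go sketch of that condition check is shown below; the kubeconfig path and pod name are taken from this log purely as examples, and the helper is illustrative rather than minikube's implementation.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True, the same
	// signal the pod_ready.go lines above are waiting on.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-6f6b679f8f-mvwth", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", isPodReady(pod))
	}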
	I0906 20:04:47.793833   72322 start.go:364] duration metric: took 56.674519436s to acquireMachinesLock for "no-preload-504385"
	I0906 20:04:47.793890   72322 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:47.793898   72322 fix.go:54] fixHost starting: 
	I0906 20:04:47.794329   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:47.794363   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:47.812048   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0906 20:04:47.812496   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:47.813081   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:04:47.813109   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:47.813446   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:47.813741   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:04:47.813945   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:04:47.815314   72322 fix.go:112] recreateIfNeeded on no-preload-504385: state=Stopped err=<nil>
	I0906 20:04:47.815338   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	W0906 20:04:47.815507   72322 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:47.817424   72322 out.go:177] * Restarting existing kvm2 VM for "no-preload-504385" ...
	I0906 20:04:47.818600   72322 main.go:141] libmachine: (no-preload-504385) Calling .Start
	I0906 20:04:47.818760   72322 main.go:141] libmachine: (no-preload-504385) Ensuring networks are active...
	I0906 20:04:47.819569   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network default is active
	I0906 20:04:47.819883   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network mk-no-preload-504385 is active
	I0906 20:04:47.820233   72322 main.go:141] libmachine: (no-preload-504385) Getting domain xml...
	I0906 20:04:47.821002   72322 main.go:141] libmachine: (no-preload-504385) Creating domain...
	I0906 20:04:46.374128   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374599   73230 main.go:141] libmachine: (old-k8s-version-843298) Found IP for machine: 192.168.72.30
	I0906 20:04:46.374629   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has current primary IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374642   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserving static IP address...
	I0906 20:04:46.375045   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.375071   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | skip adding static IP to network mk-old-k8s-version-843298 - found existing host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"}
	I0906 20:04:46.375081   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserved static IP address: 192.168.72.30
	I0906 20:04:46.375104   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting for SSH to be available...
	I0906 20:04:46.375119   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Getting to WaitForSSH function...
	I0906 20:04:46.377497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377836   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.377883   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377956   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH client type: external
	I0906 20:04:46.377982   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa (-rw-------)
	I0906 20:04:46.378028   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:46.378044   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | About to run SSH command:
	I0906 20:04:46.378054   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | exit 0
	I0906 20:04:46.505025   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | SSH cmd err, output: <nil>: 
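	The WaitForSSH step above retries an external `ssh ... exit 0` against the machine until the command succeeds. A compact sketch of that probe follows, with the address and key path copied from the log as placeholders and only a subset of the ssh options shown.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady repeatedly runs `ssh ... exit 0` against the machine until it
	// succeeds or the deadline passes, mirroring the retry loop in the log.
	func sshReady(addr, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("ssh",
				"-F", "/dev/null",
				"-o", "ConnectTimeout=10",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-i", keyPath,
				"-p", "22",
				"docker@"+addr,
				"exit 0")
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("ssh to %s not ready within %s", addr, timeout)
	}

	func main() {
		err := sshReady("192.168.72.30", "/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa", 2*time.Minute)
		fmt.Println("ssh ready:", err == nil)
	}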
	I0906 20:04:46.505386   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetConfigRaw
	I0906 20:04:46.506031   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.508401   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.508787   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.508827   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.509092   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:04:46.509321   73230 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:46.509339   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:46.509549   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.511816   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512230   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.512265   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512436   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.512618   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512794   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512932   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.513123   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.513364   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.513378   73230 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:46.629437   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:46.629469   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629712   73230 buildroot.go:166] provisioning hostname "old-k8s-version-843298"
	I0906 20:04:46.629731   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629910   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.632226   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632620   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.632653   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632817   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.633009   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633204   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633364   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.633544   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.633758   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.633779   73230 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-843298 && echo "old-k8s-version-843298" | sudo tee /etc/hostname
	I0906 20:04:46.764241   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-843298
	
	I0906 20:04:46.764271   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.766678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767063   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.767092   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767236   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.767414   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767591   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767740   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.767874   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.768069   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.768088   73230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-843298' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-843298/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-843298' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:46.890399   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:46.890424   73230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:46.890461   73230 buildroot.go:174] setting up certificates
	I0906 20:04:46.890471   73230 provision.go:84] configureAuth start
	I0906 20:04:46.890479   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.890714   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.893391   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893765   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.893802   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893942   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.896173   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896505   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.896524   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896688   73230 provision.go:143] copyHostCerts
	I0906 20:04:46.896741   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:46.896756   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:46.896814   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:46.896967   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:46.896977   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:46.897008   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:46.897096   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:46.897104   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:46.897133   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:46.897193   73230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-843298 san=[127.0.0.1 192.168.72.30 localhost minikube old-k8s-version-843298]
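
The SAN list in the line above can be double-checked on the generated server certificate. A hedged sketch using plain openssl (the .pem path is taken from the log; openssl is not what minikube itself uses to generate the cert):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'
    # expect SAN entries for localhost, minikube, old-k8s-version-843298, 127.0.0.1 and 192.168.72.30
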
	I0906 20:04:47.128570   73230 provision.go:177] copyRemoteCerts
	I0906 20:04:47.128627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:47.128653   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.131548   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.131952   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.131981   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.132164   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.132396   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.132571   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.132705   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.223745   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:47.249671   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0906 20:04:47.274918   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:47.300351   73230 provision.go:87] duration metric: took 409.869395ms to configureAuth
	I0906 20:04:47.300376   73230 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:47.300584   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:04:47.300673   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.303255   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303559   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.303581   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303739   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.303943   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304098   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304266   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.304407   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.304623   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.304644   73230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:47.539793   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:47.539824   73230 machine.go:96] duration metric: took 1.030489839s to provisionDockerMachine
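
The step above writes the --insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O. A hedged way to confirm it took effect on the guest; the second check assumes the crio unit on the minikube ISO expands CRIO_MINIKUBE_OPTIONS into its command line:

    cat /etc/sysconfig/crio.minikube
    ps -o args= -C crio | tr ' ' '\n' | grep insecure-registry
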
	I0906 20:04:47.539836   73230 start.go:293] postStartSetup for "old-k8s-version-843298" (driver="kvm2")
	I0906 20:04:47.539849   73230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:47.539884   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.540193   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:47.540220   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.543190   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543482   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.543506   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543707   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.543938   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.544097   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.544243   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.633100   73230 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:47.637336   73230 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:47.637368   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:47.637459   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:47.637541   73230 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:47.637627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:47.648442   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:47.672907   73230 start.go:296] duration metric: took 133.055727ms for postStartSetup
	I0906 20:04:47.672951   73230 fix.go:56] duration metric: took 21.114855209s for fixHost
	I0906 20:04:47.672978   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.675459   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.675833   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.675863   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.676005   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.676303   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676471   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676661   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.676846   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.677056   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.677070   73230 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:47.793647   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653087.750926682
	
	I0906 20:04:47.793671   73230 fix.go:216] guest clock: 1725653087.750926682
	I0906 20:04:47.793681   73230 fix.go:229] Guest: 2024-09-06 20:04:47.750926682 +0000 UTC Remote: 2024-09-06 20:04:47.67295613 +0000 UTC m=+232.250384025 (delta=77.970552ms)
	I0906 20:04:47.793735   73230 fix.go:200] guest clock delta is within tolerance: 77.970552ms
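
The reported delta is simply the guest timestamp minus the remote timestamp from the two lines above; a quick sketch of the arithmetic with the logged values:

    awk 'BEGIN { printf "%.2f ms\n", (1725653087.750926682 - 1725653087.67295613) * 1000 }'
    # -> 77.97 ms, matching the logged delta of 77.970552ms
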
	I0906 20:04:47.793746   73230 start.go:83] releasing machines lock for "old-k8s-version-843298", held for 21.235682628s
	I0906 20:04:47.793778   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.794059   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:47.796792   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797195   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.797229   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797425   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798019   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798230   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798314   73230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:47.798360   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.798488   73230 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:47.798509   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.801253   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801632   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.801658   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801867   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802060   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802122   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.802152   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.802210   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802318   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802460   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802504   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.802580   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802722   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.886458   73230 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:47.910204   73230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:48.055661   73230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:48.063024   73230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:48.063090   73230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:48.084749   73230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
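
The find/mv invocation above is logged without its shell quoting; a hedged, properly quoted reconstruction of the same CNI-disabling step (paths and name patterns copied from the log, quoting inferred):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
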
	I0906 20:04:48.084771   73230 start.go:495] detecting cgroup driver to use...
	I0906 20:04:48.084892   73230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:48.105494   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:48.123487   73230 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:48.123564   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:48.145077   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:48.161336   73230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:48.283568   73230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:48.445075   73230 docker.go:233] disabling docker service ...
	I0906 20:04:48.445146   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:48.461122   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:48.475713   73230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:48.632804   73230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:48.762550   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:48.778737   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:48.798465   73230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:04:48.798549   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.811449   73230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:48.811523   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.824192   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.835598   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
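
After the three sed edits above, the CRI-O drop-in should carry the pause image, cgroup manager and conmon cgroup settings. A small verification sketch; the file path and key names are taken from the log, and the expected values assume the edits applied cleanly:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
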
	I0906 20:04:48.847396   73230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:48.860005   73230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:48.871802   73230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:48.871864   73230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:48.887596   73230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
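
The modprobe and echo above load br_netfilter and enable IPv4 forwarding after the bridge sysctl probe failed. A minimal check, using the module and sysctl names referenced in the log:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # ip_forward should now read 1; the bridge-nf sysctl only exists once br_netfilter is loaded
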
	I0906 20:04:48.899508   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:49.041924   73230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:49.144785   73230 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:49.144885   73230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:49.150404   73230 start.go:563] Will wait 60s for crictl version
	I0906 20:04:49.150461   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:49.154726   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:49.202450   73230 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:49.202557   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.235790   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.270094   73230 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0906 20:04:49.271457   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:49.274710   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275114   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:49.275139   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275475   73230 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:49.280437   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:49.293664   73230 kubeadm.go:883] updating cluster {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:49.293793   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:04:49.293842   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:49.348172   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:49.348251   73230 ssh_runner.go:195] Run: which lz4
	I0906 20:04:49.352703   73230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:49.357463   73230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:49.357501   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
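
A simple spot-check on the copied preload tarball, run on the guest; the expected size is taken from the scp line above, and the lz4 integrity test is optional (only if the lz4 tool is present on the guest):

    stat -c '%s %n' /preloaded.tar.lz4    # expect: 473237281 /preloaded.tar.lz4
    lz4 -t /preloaded.tar.lz4 && echo "lz4 frame OK"
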
	I0906 20:04:49.056116   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:51.553185   72867 pod_ready.go:93] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.553217   72867 pod_ready.go:82] duration metric: took 11.508264695s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.553231   72867 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563758   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.563788   72867 pod_ready.go:82] duration metric: took 10.547437ms for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563802   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570906   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.570940   72867 pod_ready.go:82] duration metric: took 7.128595ms for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570957   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:48.657527   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:50.662561   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:49.146755   72322 main.go:141] libmachine: (no-preload-504385) Waiting to get IP...
	I0906 20:04:49.147780   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.148331   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.148406   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.148309   74322 retry.go:31] will retry after 250.314453ms: waiting for machine to come up
	I0906 20:04:49.399920   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.400386   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.400468   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.400345   74322 retry.go:31] will retry after 247.263156ms: waiting for machine to come up
	I0906 20:04:49.648894   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.649420   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.649445   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.649376   74322 retry.go:31] will retry after 391.564663ms: waiting for machine to come up
	I0906 20:04:50.043107   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.043594   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.043617   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.043548   74322 retry.go:31] will retry after 513.924674ms: waiting for machine to come up
	I0906 20:04:50.559145   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.559637   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.559675   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.559543   74322 retry.go:31] will retry after 551.166456ms: waiting for machine to come up
	I0906 20:04:51.111906   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.112967   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.112999   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.112921   74322 retry.go:31] will retry after 653.982425ms: waiting for machine to come up
	I0906 20:04:51.768950   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.769466   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.769496   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.769419   74322 retry.go:31] will retry after 935.670438ms: waiting for machine to come up
	I0906 20:04:52.706493   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:52.707121   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:52.707152   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:52.707062   74322 retry.go:31] will retry after 1.141487289s: waiting for machine to come up
	I0906 20:04:51.190323   73230 crio.go:462] duration metric: took 1.837657617s to copy over tarball
	I0906 20:04:51.190410   73230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:54.320754   73230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.130319477s)
	I0906 20:04:54.320778   73230 crio.go:469] duration metric: took 3.130424981s to extract the tarball
	I0906 20:04:54.320785   73230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:54.388660   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:54.427475   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:54.427505   73230 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:04:54.427580   73230 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.427594   73230 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.427611   73230 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.427662   73230 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.427691   73230 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.427696   73230 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.427813   73230 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.427672   73230 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0906 20:04:54.429432   73230 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.429443   73230 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.429447   73230 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.429448   73230 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.429475   73230 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.429449   73230 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.429496   73230 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.429589   73230 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 20:04:54.603502   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.607745   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.610516   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.613580   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.616591   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.622381   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 20:04:54.636746   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.690207   73230 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0906 20:04:54.690254   73230 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.690306   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.788758   73230 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0906 20:04:54.788804   73230 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.788876   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.804173   73230 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0906 20:04:54.804228   73230 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.804273   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817005   73230 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0906 20:04:54.817056   73230 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.817074   73230 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0906 20:04:54.817101   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817122   73230 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.817138   73230 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0906 20:04:54.817167   73230 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 20:04:54.817202   73230 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0906 20:04:54.817213   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817220   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.817227   73230 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.817168   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817253   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817301   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.817333   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902264   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.902422   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902522   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.902569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.902602   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.902654   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:54.902708   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.061686   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.073933   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.085364   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:55.085463   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.085399   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.085610   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:55.085725   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.192872   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:55.196085   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.255204   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.288569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.291461   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0906 20:04:55.291541   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.291559   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0906 20:04:55.291726   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0906 20:04:53.578469   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.578504   72867 pod_ready.go:82] duration metric: took 2.007539423s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.578534   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583560   72867 pod_ready.go:93] pod "kube-proxy-9wlq4" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.583583   72867 pod_ready.go:82] duration metric: took 5.037068ms for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583594   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832422   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:54.832453   72867 pod_ready.go:82] duration metric: took 1.248849975s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832480   72867 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:56.840031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:53.156842   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:55.236051   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:53.849822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:53.850213   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:53.850235   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:53.850178   74322 retry.go:31] will retry after 1.858736556s: waiting for machine to come up
	I0906 20:04:55.710052   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:55.710550   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:55.710598   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:55.710496   74322 retry.go:31] will retry after 2.033556628s: waiting for machine to come up
	I0906 20:04:57.745989   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:57.746433   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:57.746459   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:57.746388   74322 retry.go:31] will retry after 1.985648261s: waiting for machine to come up
	I0906 20:04:55.500590   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0906 20:04:55.500702   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0906 20:04:55.500740   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0906 20:04:55.500824   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0906 20:04:55.500885   73230 cache_images.go:92] duration metric: took 1.07336017s to LoadCachedImages
	W0906 20:04:55.500953   73230 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
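
The warning above points at a missing file in the local image cache. A small sketch for listing which cached tarballs actually exist on the host (directory path copied from the warning):

    ls -lh /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/
    # kube-proxy_v1.20.0 is the entry the loader reports as missing in this run
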
	I0906 20:04:55.500969   73230 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.20.0 crio true true} ...
	I0906 20:04:55.501112   73230 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-843298 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:55.501192   73230 ssh_runner.go:195] Run: crio config
	I0906 20:04:55.554097   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:04:55.554119   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:55.554135   73230 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:55.554154   73230 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-843298 NodeName:old-k8s-version-843298 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 20:04:55.554359   73230 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-843298"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:55.554441   73230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0906 20:04:55.565923   73230 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:55.566004   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:55.577366   73230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0906 20:04:55.595470   73230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:55.614641   73230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
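The kubeadm YAML above is rendered by minikube from the option struct logged earlier and then copied to /var/tmp/minikube/kubeadm.yaml.new on the guest. The sketch below shows one way such a file could be rendered with text/template; the struct and template are trimmed-down illustrations, not minikube's actual types.

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is an illustrative stand-in for minikube's kubeadm options.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress: "192.168.72.30",
		BindPort:         8443,
		NodeName:         "old-k8s-version-843298",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.20.0",
	}
	// Render to stdout; minikube instead scp's the rendered file to the node.
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
}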
	I0906 20:04:55.637739   73230 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:55.642233   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:55.658409   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:55.804327   73230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:55.824288   73230 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298 for IP: 192.168.72.30
	I0906 20:04:55.824308   73230 certs.go:194] generating shared ca certs ...
	I0906 20:04:55.824323   73230 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:55.824479   73230 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:55.824541   73230 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:55.824560   73230 certs.go:256] generating profile certs ...
	I0906 20:04:55.824680   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.key
	I0906 20:04:55.824755   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3
	I0906 20:04:55.824799   73230 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key
	I0906 20:04:55.824952   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:55.824995   73230 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:55.825008   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:55.825041   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:55.825072   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:55.825102   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:55.825158   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:55.825878   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:55.868796   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:55.905185   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:55.935398   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:55.973373   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 20:04:56.008496   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:04:56.046017   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:56.080049   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:56.122717   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:56.151287   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:56.184273   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:56.216780   73230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:56.239708   73230 ssh_runner.go:195] Run: openssl version
	I0906 20:04:56.246127   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:56.257597   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262515   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262594   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.269207   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:56.281646   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:56.293773   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299185   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299255   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.305740   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:56.319060   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:56.330840   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336013   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336082   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.342576   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
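Each of the three certificates above is installed by computing its OpenSSL subject hash and linking /etc/ssl/certs/<hash>.0 to it, which is how the system trust store resolves CAs. A small sketch of that hash-and-symlink step follows; it shells out to openssl just like the log does, and the paths are examples.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and
// installs the <hash>.0 symlink, mirroring the `openssl x509 -hash -noout`
// plus `ln -fs` pair in the log above.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ignore error: the link may not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}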
	I0906 20:04:56.354648   73230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:56.359686   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:56.366321   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:56.372646   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:56.379199   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:56.386208   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:56.392519   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
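The `-checkend 86400` runs above ask whether each control-plane certificate stays valid for at least another 24 hours before the restart proceeds. The same check can be expressed directly against the certificate's NotAfter field, as in this sketch (the path is just an example from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path is still valid for at
// least d, the question `openssl x509 -checkend 86400` answers above.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}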
	I0906 20:04:56.399335   73230 kubeadm.go:392] StartCluster: {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:56.399442   73230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:56.399495   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.441986   73230 cri.go:89] found id: ""
	I0906 20:04:56.442069   73230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:56.454884   73230 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:56.454907   73230 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:56.454977   73230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:56.465647   73230 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:56.466650   73230 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:04:56.467285   73230 kubeconfig.go:62] /home/jenkins/minikube-integration/19576-6021/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-843298" cluster setting kubeconfig missing "old-k8s-version-843298" context setting]
	I0906 20:04:56.468248   73230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:56.565587   73230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:56.576221   73230 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.30
	I0906 20:04:56.576261   73230 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:56.576277   73230 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:56.576342   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.621597   73230 cri.go:89] found id: ""
	I0906 20:04:56.621663   73230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:56.639924   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:56.649964   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:56.649989   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:56.650042   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:56.661290   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:56.661343   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:56.671361   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:56.680865   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:56.680939   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:56.696230   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.706613   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:56.706692   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.719635   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:56.729992   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:56.730045   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:56.740040   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:56.750666   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:56.891897   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.681824   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.972206   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.091751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.206345   73230 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:58.206443   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:58.707412   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.206780   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.707273   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:00.207218   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
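The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` calls above are a fixed-interval poll: minikube re-checks roughly every 500ms until an apiserver process appears. A local sketch of that wait is below; the real check runs over SSH inside the guest, so the command invocation here is only illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep for a kube-apiserver process every 500ms,
// matching the cadence of the log lines above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 when a matching process is found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}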
	I0906 20:04:59.340092   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:01.838387   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:57.658033   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:00.157741   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:59.734045   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:59.734565   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:59.734592   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:59.734506   74322 retry.go:31] will retry after 2.767491398s: waiting for machine to come up
	I0906 20:05:02.505314   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:02.505749   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:05:02.505780   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:05:02.505697   74322 retry.go:31] will retry after 3.51382931s: waiting for machine to come up
	I0906 20:05:00.707010   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.206708   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.707125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.207349   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.706670   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.207287   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.706650   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.207125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.707193   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:05.207119   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.838639   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:05.839195   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:02.655906   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:04.656677   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:07.157732   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:06.023595   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024063   72322 main.go:141] libmachine: (no-preload-504385) Found IP for machine: 192.168.61.184
	I0906 20:05:06.024095   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has current primary IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024105   72322 main.go:141] libmachine: (no-preload-504385) Reserving static IP address...
	I0906 20:05:06.024576   72322 main.go:141] libmachine: (no-preload-504385) Reserved static IP address: 192.168.61.184
	I0906 20:05:06.024598   72322 main.go:141] libmachine: (no-preload-504385) Waiting for SSH to be available...
	I0906 20:05:06.024621   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.024643   72322 main.go:141] libmachine: (no-preload-504385) DBG | skip adding static IP to network mk-no-preload-504385 - found existing host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"}
	I0906 20:05:06.024666   72322 main.go:141] libmachine: (no-preload-504385) DBG | Getting to WaitForSSH function...
	I0906 20:05:06.026845   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027166   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.027219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027296   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH client type: external
	I0906 20:05:06.027321   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa (-rw-------)
	I0906 20:05:06.027355   72322 main.go:141] libmachine: (no-preload-504385) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:05:06.027376   72322 main.go:141] libmachine: (no-preload-504385) DBG | About to run SSH command:
	I0906 20:05:06.027403   72322 main.go:141] libmachine: (no-preload-504385) DBG | exit 0
	I0906 20:05:06.148816   72322 main.go:141] libmachine: (no-preload-504385) DBG | SSH cmd err, output: <nil>: 
	I0906 20:05:06.149196   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetConfigRaw
	I0906 20:05:06.149951   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.152588   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.152970   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.153003   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.153238   72322 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/config.json ...
	I0906 20:05:06.153485   72322 machine.go:93] provisionDockerMachine start ...
	I0906 20:05:06.153508   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:06.153714   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.156031   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156394   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.156425   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156562   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.156732   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.156901   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.157051   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.157205   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.157411   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.157425   72322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:05:06.261544   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:05:06.261586   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.261861   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:05:06.261895   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.262063   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.264812   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265192   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.265219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265400   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.265570   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265705   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265856   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.265990   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.266145   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.266157   72322 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-504385 && echo "no-preload-504385" | sudo tee /etc/hostname
	I0906 20:05:06.383428   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-504385
	
	I0906 20:05:06.383456   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.386368   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386722   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.386755   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386968   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.387152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387322   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387439   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.387617   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.387817   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.387840   72322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-504385' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-504385/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-504385' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:05:06.501805   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:05:06.501836   72322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:05:06.501854   72322 buildroot.go:174] setting up certificates
	I0906 20:05:06.501866   72322 provision.go:84] configureAuth start
	I0906 20:05:06.501873   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.502152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.504721   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505086   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.505115   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505250   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.507420   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507765   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.507795   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507940   72322 provision.go:143] copyHostCerts
	I0906 20:05:06.508008   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:05:06.508031   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:05:06.508087   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:05:06.508175   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:05:06.508183   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:05:06.508208   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:05:06.508297   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:05:06.508307   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:05:06.508338   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:05:06.508406   72322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.no-preload-504385 san=[127.0.0.1 192.168.61.184 localhost minikube no-preload-504385]
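The "generating server cert" step issues a machine certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube, and the profile name, so SSH-provisioned services can be reached under any of those names. The sketch below issues a comparable certificate with crypto/x509; it self-signs for brevity, whereas minikube signs with the ca-key.pem shown above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a server certificate template with the SANs from the log.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-504385"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.184")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-504385"},
	}
	// Self-signed here (template doubles as parent); minikube uses its CA instead.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}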
	I0906 20:05:06.681719   72322 provision.go:177] copyRemoteCerts
	I0906 20:05:06.681786   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:05:06.681810   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.684460   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684779   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.684822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684962   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.685125   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.685258   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.685368   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:06.767422   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:05:06.794881   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0906 20:05:06.821701   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:05:06.848044   72322 provision.go:87] duration metric: took 346.1664ms to configureAuth
	I0906 20:05:06.848075   72322 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:05:06.848271   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:05:06.848348   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.850743   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851037   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.851064   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851226   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.851395   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851549   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851674   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.851791   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.851993   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.852020   72322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:05:07.074619   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:05:07.074643   72322 machine.go:96] duration metric: took 921.143238ms to provisionDockerMachine
	I0906 20:05:07.074654   72322 start.go:293] postStartSetup for "no-preload-504385" (driver="kvm2")
	I0906 20:05:07.074664   72322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:05:07.074678   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.075017   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:05:07.075042   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.077988   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078268   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.078287   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078449   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.078634   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.078791   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.078946   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.165046   72322 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:05:07.169539   72322 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:05:07.169565   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:05:07.169631   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:05:07.169700   72322 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:05:07.169783   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:05:07.179344   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:07.204213   72322 start.go:296] duration metric: took 129.545341ms for postStartSetup
	I0906 20:05:07.204265   72322 fix.go:56] duration metric: took 19.41036755s for fixHost
	I0906 20:05:07.204287   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.207087   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207473   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.207513   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207695   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.207905   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208090   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208267   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.208436   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:07.208640   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:07.208655   72322 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:05:07.314172   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653107.281354639
	
	I0906 20:05:07.314195   72322 fix.go:216] guest clock: 1725653107.281354639
	I0906 20:05:07.314205   72322 fix.go:229] Guest: 2024-09-06 20:05:07.281354639 +0000 UTC Remote: 2024-09-06 20:05:07.204269406 +0000 UTC m=+358.676673749 (delta=77.085233ms)
	I0906 20:05:07.314228   72322 fix.go:200] guest clock delta is within tolerance: 77.085233ms
	I0906 20:05:07.314237   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 19.52037381s
	I0906 20:05:07.314266   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.314552   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:07.317476   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.317839   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.317873   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.318003   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318542   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318716   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318821   72322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:05:07.318876   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.318991   72322 ssh_runner.go:195] Run: cat /version.json
	I0906 20:05:07.319018   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.321880   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322102   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322308   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322340   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322472   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322508   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322550   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322685   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.322713   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322868   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.322875   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.323062   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.323066   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.323221   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.424438   72322 ssh_runner.go:195] Run: systemctl --version
	I0906 20:05:07.430755   72322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:05:07.579436   72322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:05:07.585425   72322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:05:07.585493   72322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:05:07.601437   72322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:05:07.601462   72322 start.go:495] detecting cgroup driver to use...
	I0906 20:05:07.601529   72322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:05:07.620368   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:05:07.634848   72322 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:05:07.634912   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:05:07.648810   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:05:07.664084   72322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:05:07.796601   72322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:05:07.974836   72322 docker.go:233] disabling docker service ...
	I0906 20:05:07.974911   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:05:07.989013   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:05:08.002272   72322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:05:08.121115   72322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:05:08.247908   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:05:08.262855   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:05:08.281662   72322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:05:08.281730   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.292088   72322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:05:08.292165   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.302601   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.313143   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.323852   72322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:05:08.335791   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.347619   72322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.365940   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.376124   72322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:05:08.385677   72322 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:05:08.385743   72322 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:05:08.398445   72322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:05:08.408477   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:08.518447   72322 ssh_runner.go:195] Run: sudo systemctl restart crio
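
For context on the step just logged: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, sysctls) and the runtime is then restarted. A minimal Go sketch of that sequence follows; the commands and file path are taken from the log, but running them locally via os/exec (instead of minikube's SSH runner) and the helper name are illustrative assumptions, not minikube's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// configureCRIO mirrors the sed edits and service restart shown in the log above.
// Local execution via os/exec stands in for minikube's remote SSH runner.
func configureCRIO() error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := [][]string{
		{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`, conf},
		{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		{"sudo", "sed", "-i", `/conmon_cgroup = .*/d`, conf},
		{"sudo", "sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, args := range steps {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := configureCRIO(); err != nil {
		fmt.Println(err)
	}
}
```
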
	I0906 20:05:08.613636   72322 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:05:08.613707   72322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:05:08.619050   72322 start.go:563] Will wait 60s for crictl version
	I0906 20:05:08.619134   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:08.622959   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:05:08.668229   72322 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:05:08.668297   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.702416   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.733283   72322 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
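
The "Will wait 60s for socket path /var/run/crio/crio.sock" step a few lines above simply polls for the CRI socket before probing `crictl version`. A small Go sketch of that wait, assuming a plain local os.Stat loop stands in for the remote stat the test actually runs:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket path exists or the timeout elapses,
// roughly what the "Will wait 60s for socket path" step above does.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```
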
	I0906 20:05:05.707351   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.206573   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.707452   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.206554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.706854   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.206925   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.707456   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.207200   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.706741   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:10.206605   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.839381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.839918   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.157889   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:11.158761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
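
The repeated pod_ready lines above come from polling whether the metrics-server pod has a Ready condition of True. A minimal check of that condition using the Kubernetes Go API types; fetching the Pod from the API server is omitted, and the function name is illustrative:

```go
package ready

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether a Pod's Ready condition is True; this is the
// condition the pod_ready log lines above keep re-checking for metrics-server.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```
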
	I0906 20:05:08.734700   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:08.737126   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737477   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:08.737504   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737692   72322 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0906 20:05:08.741940   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:08.756235   72322 kubeadm.go:883] updating cluster {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:05:08.756380   72322 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:05:08.756426   72322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:05:08.798359   72322 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:05:08.798388   72322 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:05:08.798484   72322 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.798507   72322 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.798520   72322 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0906 20:05:08.798559   72322 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.798512   72322 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.798571   72322 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.798494   72322 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.798489   72322 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800044   72322 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.800055   72322 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800048   72322 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0906 20:05:08.800067   72322 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.800070   72322 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.800043   72322 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.800046   72322 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.800050   72322 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.960723   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.967887   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.980496   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.988288   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.990844   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.000220   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.031002   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0906 20:05:09.046388   72322 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0906 20:05:09.046430   72322 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.046471   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.079069   72322 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0906 20:05:09.079112   72322 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.079161   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147423   72322 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0906 20:05:09.147470   72322 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.147521   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147529   72322 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0906 20:05:09.147549   72322 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.147584   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153575   72322 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0906 20:05:09.153612   72322 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.153659   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153662   72322 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0906 20:05:09.153697   72322 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.153736   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.272296   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.272317   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.272325   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.272368   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.272398   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.272474   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.397590   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.398793   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.398807   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.398899   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.398912   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.398969   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.515664   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.529550   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.529604   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.529762   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.532314   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.532385   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.603138   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.654698   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0906 20:05:09.654823   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:09.671020   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0906 20:05:09.671069   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0906 20:05:09.671123   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0906 20:05:09.671156   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:09.671128   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.671208   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:09.686883   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0906 20:05:09.687013   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:09.709594   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0906 20:05:09.709706   72322 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0906 20:05:09.709758   72322 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.709858   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0906 20:05:09.709877   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709868   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.709940   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0906 20:05:09.709906   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709994   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0906 20:05:09.709771   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0906 20:05:09.709973   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0906 20:05:09.709721   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:09.714755   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0906 20:05:12.389459   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.679458658s)
	I0906 20:05:12.389498   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0906 20:05:12.389522   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389524   72322 ssh_runner.go:235] Completed: which crictl: (2.679596804s)
	I0906 20:05:12.389573   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389582   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:10.706506   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.207411   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.707316   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.207239   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.706502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.206560   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.706593   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.207192   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.706940   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:15.207250   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.338753   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.339694   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.839193   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:13.656815   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.156988   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.349906   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.960304583s)
	I0906 20:05:14.349962   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960364149s)
	I0906 20:05:14.349988   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:14.350001   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0906 20:05:14.350032   72322 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.350085   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.397740   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:16.430883   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.03310928s)
	I0906 20:05:16.430943   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 20:05:16.430977   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.080869318s)
	I0906 20:05:16.431004   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0906 20:05:16.431042   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:16.431042   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:16.431103   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:18.293255   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.862123731s)
	I0906 20:05:18.293274   72322 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.862211647s)
	I0906 20:05:18.293294   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0906 20:05:18.293315   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0906 20:05:18.293324   72322 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:18.293372   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:15.706728   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.207477   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.707337   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.206710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.707209   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.206544   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.707104   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.206752   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.706561   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:20.206507   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.840176   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.339033   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:18.657074   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.157488   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:19.142756   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0906 20:05:19.142784   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:19.142824   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:20.494611   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351756729s)
	I0906 20:05:20.494642   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0906 20:05:20.494656   72322 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.494706   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.706855   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.206585   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.706948   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.207150   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.706508   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.207459   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.706894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.206643   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.707208   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:25.206797   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.838561   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:25.838697   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:23.656303   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:26.156813   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:24.186953   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.692203906s)
	I0906 20:05:24.186987   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0906 20:05:24.187019   72322 cache_images.go:123] Successfully loaded all cached images
	I0906 20:05:24.187026   72322 cache_images.go:92] duration metric: took 15.388623154s to LoadCachedImages
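
The LoadCachedImages sequence above checks each required image with `podman image inspect`, removes stale copies with `crictl rmi`, skips the tarball copy when it already exists, and loads it with `podman load -i`. A rough Go sketch of the inspect-then-load part; the image name and tarball path are taken from the log, while the helper itself (and skipping the hash comparison) is an illustrative simplification:

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureImage loads a cached image tarball with podman when the image is not
// already present, mirroring the inspect / load pattern in the step above.
func ensureImage(image, tarPath string) error {
	// "podman image inspect" exits non-zero when the image is missing.
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
		return nil // present; the real code also compares the image ID against the expected hash
	}
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarPath).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarPath, err, out)
	}
	return nil
}

func main() {
	err := ensureImage("registry.k8s.io/kube-apiserver:v1.31.0",
		"/var/lib/minikube/images/kube-apiserver_v1.31.0")
	if err != nil {
		fmt.Println(err)
	}
}
```
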
	I0906 20:05:24.187040   72322 kubeadm.go:934] updating node { 192.168.61.184 8443 v1.31.0 crio true true} ...
	I0906 20:05:24.187169   72322 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-504385 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:05:24.187251   72322 ssh_runner.go:195] Run: crio config
	I0906 20:05:24.236699   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:24.236722   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:24.236746   72322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:05:24.236770   72322 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.184 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-504385 NodeName:no-preload-504385 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:05:24.236943   72322 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-504385"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
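
The kubeadm config dumped above is rendered from the cluster settings (advertise address, node name, pod/service CIDRs, cgroup driver). The sketch below shows how such a document could be produced with Go's text/template; the template is heavily trimmed to a few fields and is an assumption for illustration, not minikube's actual template.

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

// Illustrative, trimmed template; the real config above also carries
// ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration blocks.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

type params struct {
	NodeIP        string
	APIServerPort int
	NodeName      string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	p := params{NodeIP: "192.168.61.184", APIServerPort: 8443, NodeName: "no-preload-504385"}
	if err := t.Execute(os.Stdout, p); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
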
	
	I0906 20:05:24.237005   72322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:05:24.247480   72322 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:05:24.247554   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:05:24.257088   72322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0906 20:05:24.274447   72322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:05:24.292414   72322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0906 20:05:24.310990   72322 ssh_runner.go:195] Run: grep 192.168.61.184	control-plane.minikube.internal$ /etc/hosts
	I0906 20:05:24.315481   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:24.327268   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:24.465318   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:05:24.482195   72322 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385 for IP: 192.168.61.184
	I0906 20:05:24.482216   72322 certs.go:194] generating shared ca certs ...
	I0906 20:05:24.482230   72322 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:05:24.482364   72322 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:05:24.482407   72322 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:05:24.482420   72322 certs.go:256] generating profile certs ...
	I0906 20:05:24.482522   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/client.key
	I0906 20:05:24.482603   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key.9c78613e
	I0906 20:05:24.482664   72322 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key
	I0906 20:05:24.482828   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:05:24.482878   72322 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:05:24.482894   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:05:24.482927   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:05:24.482956   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:05:24.482992   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:05:24.483043   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:24.483686   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:05:24.528742   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:05:24.561921   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:05:24.596162   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:05:24.636490   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 20:05:24.664450   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:05:24.690551   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:05:24.717308   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:05:24.741498   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:05:24.764388   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:05:24.789473   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:05:24.814772   72322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:05:24.833405   72322 ssh_runner.go:195] Run: openssl version
	I0906 20:05:24.841007   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:05:24.852635   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857351   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857404   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.863435   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:05:24.874059   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:05:24.884939   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889474   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889567   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.895161   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:05:24.905629   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:05:24.916101   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920494   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920550   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.925973   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:05:24.937017   72322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:05:24.941834   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:05:24.947779   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:05:24.954042   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:05:24.959977   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:05:24.965500   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:05:24.970996   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
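
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check in plain Go with crypto/x509; the path used in main is one of the certs from the log, and the helper name is illustrative:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the question "openssl x509 -checkend 86400" answers in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
```
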
	I0906 20:05:24.976532   72322 kubeadm.go:392] StartCluster: {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:05:24.976606   72322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:05:24.976667   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.015556   72322 cri.go:89] found id: ""
	I0906 20:05:25.015653   72322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:05:25.032921   72322 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:05:25.032954   72322 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:05:25.033009   72322 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:05:25.044039   72322 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:05:25.045560   72322 kubeconfig.go:125] found "no-preload-504385" server: "https://192.168.61.184:8443"
	I0906 20:05:25.049085   72322 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:05:25.059027   72322 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.184
	I0906 20:05:25.059060   72322 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:05:25.059073   72322 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:05:25.059128   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.096382   72322 cri.go:89] found id: ""
	I0906 20:05:25.096446   72322 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:05:25.114296   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:05:25.126150   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:05:25.126168   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:05:25.126207   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:05:25.136896   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:05:25.136964   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:05:25.148074   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:05:25.158968   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:05:25.159027   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:05:25.169642   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.179183   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:05:25.179258   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.189449   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:05:25.199237   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:05:25.199286   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
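
The grep / rm -f sequence just above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here the files simply do not exist yet, so every grep fails and rm -f is a no-op). A hedged Go sketch of that cleanup loop, using local file reads in place of the remote grep; the function is illustrative, not minikube's code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleConfigs deletes any kubeconfig that does not reference the expected
// control-plane endpoint, mirroring the grep / rm -f sequence in the log above.
func removeStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the right endpoint
		}
		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}

func main() {
	removeStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```
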
	I0906 20:05:25.209663   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:05:25.220511   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:25.336312   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.475543   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.139195419s)
	I0906 20:05:26.475586   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.700018   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.768678   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.901831   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:05:26.901928   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.401987   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.903023   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.957637   72322 api_server.go:72] duration metric: took 1.055807s to wait for apiserver process to appear ...
	I0906 20:05:27.957664   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:05:27.957684   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:27.958196   72322 api_server.go:269] stopped: https://192.168.61.184:8443/healthz: Get "https://192.168.61.184:8443/healthz": dial tcp 192.168.61.184:8443: connect: connection refused
	I0906 20:05:28.458421   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
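
The "waiting for apiserver healthz status" loop above keeps requesting https://192.168.61.184:8443/healthz until it stops returning connection refused, 403, or 500. A minimal Go polling sketch of that idea; skipping TLS verification is purely for illustration, since the real check authenticates against the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the timeout elapses, similar to the checks logged above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.184:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
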
	I0906 20:05:25.706669   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.206691   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.707336   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.206666   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.706715   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.206488   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.706489   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.207461   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.707293   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:30.206591   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.840001   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:29.840101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.768451   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:05:30.768482   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:05:30.768505   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.868390   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.868430   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:30.958611   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.964946   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.964977   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.458125   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.462130   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.462155   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.958761   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.963320   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.963347   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:32.458596   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:32.464885   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:05:32.474582   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:05:32.474616   72322 api_server.go:131] duration metric: took 4.51694462s to wait for apiserver health ...
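	(The 403 and 500 bodies above are the apiserver's own per-check healthz output; minikube simply keeps polling https://192.168.61.184:8443/healthz until it answers 200. A rough manual equivalent from the host is sketched below; the certificate paths are an assumption based on where minikube normally keeps client certs, not something taken from this log.)

	# Hedged sketch: probe the same endpoint by hand. Anonymous requests can get the
	# 403 seen above; while poststarthooks are still registering, the reply is a 500
	# carrying the [+]/[-] check list; a healthy apiserver answers 200 "ok".
	curl -sk "https://192.168.61.184:8443/healthz?verbose" \
	  --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  --key /var/lib/minikube/certs/apiserver-kubelet-client.key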
	I0906 20:05:32.474627   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:32.474635   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:32.476583   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:05:28.157326   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.657628   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:32.477797   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:05:32.490715   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
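	(The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced a few lines earlier. Its exact contents are not reproduced in this log; the snippet below is only a representative bridge-plus-portmap conflist of the kind such a file usually contains, and every field value is an assumption.)

	# Illustrative only -- not the verbatim file minikube generated.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF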
	I0906 20:05:32.510816   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:05:32.529192   72322 system_pods.go:59] 8 kube-system pods found
	I0906 20:05:32.529236   72322 system_pods.go:61] "coredns-6f6b679f8f-s7tnx" [ce438653-a3b9-4412-8705-7d2db7df5d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:05:32.529254   72322 system_pods.go:61] "etcd-no-preload-504385" [6ec6b2a1-c22a-44b4-b726-808a56f2be2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:05:32.529266   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [5f2baa0b-3cf3-4e0d-984b-80fa19adb3b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:05:32.529275   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [59ffbd51-6a83-43e6-8ef7-bc1cfd80b4d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:05:32.529292   72322 system_pods.go:61] "kube-proxy-dg8sg" [2e0393f3-b9bd-4603-b800-e1a2fdbf71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:05:32.529300   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [52a74c91-a6ec-4d64-8651-e1f87db21b40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:05:32.529306   72322 system_pods.go:61] "metrics-server-6867b74b74-nn295" [9d0f51d1-7abf-4f63-bef7-c02f6cd89c5d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:05:32.529313   72322 system_pods.go:61] "storage-provisioner" [69ed0066-2b84-4a4d-91e5-1e25bb3f31eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:05:32.529320   72322 system_pods.go:74] duration metric: took 18.48107ms to wait for pod list to return data ...
	I0906 20:05:32.529333   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:05:32.535331   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:05:32.535363   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:05:32.535376   72322 node_conditions.go:105] duration metric: took 6.037772ms to run NodePressure ...
	I0906 20:05:32.535397   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:32.955327   72322 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962739   72322 kubeadm.go:739] kubelet initialised
	I0906 20:05:32.962767   72322 kubeadm.go:740] duration metric: took 7.415054ms waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962776   72322 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:05:32.980280   72322 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:30.707091   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.207070   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.707224   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.207295   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.707195   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.207373   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.707519   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.207428   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.706808   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:35.207396   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.340006   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.838636   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:36.838703   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:33.155769   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.156761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.994689   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.487610   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.707415   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.206955   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.706868   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.206515   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.706659   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.206735   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.706915   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.207300   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.707211   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:40.207085   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.839362   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:41.338875   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.657190   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.158940   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:39.986557   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.486518   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.706720   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.206896   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.707281   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.206751   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.706754   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.206987   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.707245   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.207502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.707112   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:45.206569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.339353   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.838975   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.657187   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.156196   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:47.157014   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:43.986675   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.986701   72322 pod_ready.go:82] duration metric: took 11.006397745s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.986710   72322 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991650   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.991671   72322 pod_ready.go:82] duration metric: took 4.955425ms for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991680   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997218   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:44.997242   72322 pod_ready.go:82] duration metric: took 1.005553613s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997253   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002155   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.002177   72322 pod_ready.go:82] duration metric: took 4.916677ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002186   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006610   72322 pod_ready.go:93] pod "kube-proxy-dg8sg" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.006631   72322 pod_ready.go:82] duration metric: took 4.439092ms for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006639   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185114   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.185139   72322 pod_ready.go:82] duration metric: took 178.494249ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185149   72322 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:47.191676   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
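	(pod_ready.go is polling each pod's Ready condition: status "False" in the lines above, "True" once the pod reports ready. Assuming the kubectl context carries the profile name no-preload-504385, the same condition can be read or waited on by hand, as sketched below.)

	# Hedged sketch: inspect the Ready condition that pod_ready.go keeps polling.
	kubectl --context no-preload-504385 -n kube-system \
	  get pod metrics-server-6867b74b74-nn295 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Or block until it becomes True, mirroring the 4m wait used by the test:
	kubectl --context no-preload-504385 -n kube-system \
	  wait --for=condition=Ready pod/metrics-server-6867b74b74-nn295 --timeout=4m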
	I0906 20:05:45.707450   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.207446   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.707006   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.206484   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.707168   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.207536   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.707554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.206894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.706709   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:50.206799   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.338355   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.839372   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.157301   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.157426   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.193619   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.692286   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.707012   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.206914   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.706917   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.207465   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.706682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.206565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.706757   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.206600   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.706926   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:55.207382   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.338845   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.339570   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:53.656904   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.158806   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:54.191331   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.192498   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.707103   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.206621   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.707156   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.207277   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.706568   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:58.206599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:05:58.206698   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:05:58.245828   73230 cri.go:89] found id: ""
	I0906 20:05:58.245857   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.245868   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:05:58.245875   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:05:58.245938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:05:58.283189   73230 cri.go:89] found id: ""
	I0906 20:05:58.283217   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.283228   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:05:58.283235   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:05:58.283303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:05:58.320834   73230 cri.go:89] found id: ""
	I0906 20:05:58.320868   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.320880   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:05:58.320889   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:05:58.320944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:05:58.356126   73230 cri.go:89] found id: ""
	I0906 20:05:58.356152   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.356162   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:05:58.356169   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:05:58.356227   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:05:58.395951   73230 cri.go:89] found id: ""
	I0906 20:05:58.395977   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.395987   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:05:58.395994   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:05:58.396061   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:05:58.431389   73230 cri.go:89] found id: ""
	I0906 20:05:58.431415   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.431426   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:05:58.431433   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:05:58.431511   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:05:58.466255   73230 cri.go:89] found id: ""
	I0906 20:05:58.466285   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.466294   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:05:58.466300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:05:58.466356   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:05:58.505963   73230 cri.go:89] found id: ""
	I0906 20:05:58.505989   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.505997   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:05:58.506006   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:05:58.506018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:05:58.579027   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:05:58.579061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:05:58.620332   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:05:58.620365   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:05:58.675017   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:05:58.675052   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:05:58.689944   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:05:58.689970   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:05:58.825396   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:05:57.838610   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.339329   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.656312   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.656996   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.691099   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.692040   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.192516   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:01.326375   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:01.340508   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:01.340570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:01.375429   73230 cri.go:89] found id: ""
	I0906 20:06:01.375460   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.375470   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:01.375478   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:01.375539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:01.410981   73230 cri.go:89] found id: ""
	I0906 20:06:01.411008   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.411019   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:01.411026   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:01.411083   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:01.448925   73230 cri.go:89] found id: ""
	I0906 20:06:01.448957   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.448968   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:01.448975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:01.449040   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:01.492063   73230 cri.go:89] found id: ""
	I0906 20:06:01.492094   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.492104   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:01.492112   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:01.492181   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:01.557779   73230 cri.go:89] found id: ""
	I0906 20:06:01.557812   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.557823   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:01.557830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:01.557892   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:01.604397   73230 cri.go:89] found id: ""
	I0906 20:06:01.604424   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.604432   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:01.604437   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:01.604482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:01.642249   73230 cri.go:89] found id: ""
	I0906 20:06:01.642280   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.642292   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:01.642300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:01.642364   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:01.692434   73230 cri.go:89] found id: ""
	I0906 20:06:01.692462   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.692474   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:01.692483   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:01.692498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:01.705860   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:01.705884   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:01.783929   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:01.783954   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:01.783965   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:01.864347   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:01.864385   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:01.902284   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:01.902311   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:04.456090   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:04.469775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:04.469840   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:04.505742   73230 cri.go:89] found id: ""
	I0906 20:06:04.505769   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.505778   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:04.505783   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:04.505835   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:04.541787   73230 cri.go:89] found id: ""
	I0906 20:06:04.541811   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.541819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:04.541824   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:04.541874   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:04.578775   73230 cri.go:89] found id: ""
	I0906 20:06:04.578806   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.578817   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:04.578825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:04.578885   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:04.614505   73230 cri.go:89] found id: ""
	I0906 20:06:04.614533   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.614542   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:04.614548   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:04.614594   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:04.652988   73230 cri.go:89] found id: ""
	I0906 20:06:04.653016   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.653027   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:04.653035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:04.653104   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:04.692380   73230 cri.go:89] found id: ""
	I0906 20:06:04.692408   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.692416   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:04.692423   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:04.692478   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:04.729846   73230 cri.go:89] found id: ""
	I0906 20:06:04.729869   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.729880   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:04.729887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:04.729953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:04.766341   73230 cri.go:89] found id: ""
	I0906 20:06:04.766370   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.766379   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:04.766390   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:04.766405   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:04.779801   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:04.779828   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:04.855313   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:04.855334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:04.855346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:04.934210   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:04.934246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:04.975589   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:04.975621   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:02.839427   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:04.840404   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.158048   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.655510   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.192558   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.692755   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.528622   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:07.544085   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:07.544156   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:07.588106   73230 cri.go:89] found id: ""
	I0906 20:06:07.588139   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.588149   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:07.588157   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:07.588210   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:07.630440   73230 cri.go:89] found id: ""
	I0906 20:06:07.630476   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.630494   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:07.630500   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:07.630551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:07.668826   73230 cri.go:89] found id: ""
	I0906 20:06:07.668870   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.668889   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:07.668898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:07.668962   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:07.706091   73230 cri.go:89] found id: ""
	I0906 20:06:07.706118   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.706130   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:07.706138   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:07.706196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:07.741679   73230 cri.go:89] found id: ""
	I0906 20:06:07.741708   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.741719   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:07.741726   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:07.741792   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:07.778240   73230 cri.go:89] found id: ""
	I0906 20:06:07.778277   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.778288   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:07.778296   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:07.778352   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:07.813183   73230 cri.go:89] found id: ""
	I0906 20:06:07.813212   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.813224   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:07.813232   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:07.813294   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:07.853938   73230 cri.go:89] found id: ""
	I0906 20:06:07.853970   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.853980   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:07.853988   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:07.854001   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:07.893540   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:07.893567   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:07.944219   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:07.944262   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:07.959601   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:07.959635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:08.034487   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:08.034513   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:08.034529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:07.339634   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:09.838953   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.658315   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.157980   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.192738   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.691823   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.611413   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:10.625273   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:10.625353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:10.664568   73230 cri.go:89] found id: ""
	I0906 20:06:10.664597   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.664609   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:10.664617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:10.664680   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:10.702743   73230 cri.go:89] found id: ""
	I0906 20:06:10.702772   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.702783   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:10.702790   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:10.702850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:10.739462   73230 cri.go:89] found id: ""
	I0906 20:06:10.739487   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.739504   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:10.739511   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:10.739572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:10.776316   73230 cri.go:89] found id: ""
	I0906 20:06:10.776344   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.776355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:10.776362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:10.776420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:10.809407   73230 cri.go:89] found id: ""
	I0906 20:06:10.809440   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.809451   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:10.809459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:10.809519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:10.844736   73230 cri.go:89] found id: ""
	I0906 20:06:10.844765   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.844777   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:10.844784   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:10.844851   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:10.880658   73230 cri.go:89] found id: ""
	I0906 20:06:10.880685   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.880693   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:10.880698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:10.880753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:10.917032   73230 cri.go:89] found id: ""
	I0906 20:06:10.917063   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.917074   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:10.917085   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:10.917100   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:10.980241   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:10.980272   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:10.995389   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:10.995435   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:11.070285   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:11.070313   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:11.070328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:11.155574   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:11.155607   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
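	(Aside: the 73230 process above is minikube's log gatherer looping over the same diagnostics while the apiserver on this node never comes up. As a rough, hedged sketch of what those Run: lines amount to — assuming shell access to the guest, e.g. via `minikube ssh`, and using the binary path and kubeconfig exactly as they appear in the log — the checks could be reproduced by hand like this:
	    # list any control-plane containers CRI-O knows about (all states)
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      sudo crictl ps -a --quiet --name="$c"
	    done
	    # service and kernel logs, same bounds as the gatherer uses
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # node description via the bundled kubectl; fails with "connection refused" while the apiserver is down
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	This is only a sketch of the commands visible above, not minikube's actual implementation.)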
	I0906 20:06:13.703712   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:13.718035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:13.718093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:13.753578   73230 cri.go:89] found id: ""
	I0906 20:06:13.753603   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.753611   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:13.753617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:13.753659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:13.790652   73230 cri.go:89] found id: ""
	I0906 20:06:13.790681   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.790691   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:13.790697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:13.790749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:13.824243   73230 cri.go:89] found id: ""
	I0906 20:06:13.824278   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.824288   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:13.824293   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:13.824342   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:13.859647   73230 cri.go:89] found id: ""
	I0906 20:06:13.859691   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.859702   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:13.859721   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:13.859781   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:13.897026   73230 cri.go:89] found id: ""
	I0906 20:06:13.897061   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.897068   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:13.897075   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:13.897131   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:13.933904   73230 cri.go:89] found id: ""
	I0906 20:06:13.933927   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.933935   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:13.933941   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:13.933986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:13.969168   73230 cri.go:89] found id: ""
	I0906 20:06:13.969198   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.969210   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:13.969218   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:13.969295   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:14.005808   73230 cri.go:89] found id: ""
	I0906 20:06:14.005838   73230 logs.go:276] 0 containers: []
	W0906 20:06:14.005849   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:14.005862   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:14.005878   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:14.060878   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:14.060915   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:14.075388   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:14.075414   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:14.144942   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:14.144966   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:14.144981   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:14.233088   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:14.233139   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:12.338579   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.839062   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.655992   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.657020   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.157119   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.692103   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.193196   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
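	(Aside: the pod_ready.go lines from processes 72867, 72441 and 72322 are the tests polling their metrics-server pods, which never reach Ready. As a hedged sketch of an equivalent manual check — the profile name is a placeholder and the k8s-app=metrics-server label is an assumption, since the tests poll by pod name — one could run:
	    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	    # or block until the condition flips, mirroring what the test loop waits for
	    kubectl --context <profile> -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=120s
	Again, this is only an illustrative check, not part of the recorded test run.)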
	I0906 20:06:16.776744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:16.790292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:16.790384   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:16.828877   73230 cri.go:89] found id: ""
	I0906 20:06:16.828910   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.828921   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:16.828929   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:16.829016   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:16.864413   73230 cri.go:89] found id: ""
	I0906 20:06:16.864440   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.864449   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:16.864455   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:16.864525   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:16.908642   73230 cri.go:89] found id: ""
	I0906 20:06:16.908676   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.908687   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:16.908694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:16.908748   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:16.952247   73230 cri.go:89] found id: ""
	I0906 20:06:16.952278   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.952286   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:16.952292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:16.952343   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:16.990986   73230 cri.go:89] found id: ""
	I0906 20:06:16.991013   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.991022   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:16.991028   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:16.991077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:17.031002   73230 cri.go:89] found id: ""
	I0906 20:06:17.031034   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.031045   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:17.031052   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:17.031114   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:17.077533   73230 cri.go:89] found id: ""
	I0906 20:06:17.077560   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.077572   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:17.077579   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:17.077646   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:17.116770   73230 cri.go:89] found id: ""
	I0906 20:06:17.116798   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.116806   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:17.116817   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:17.116834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.169300   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:17.169337   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:17.184266   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:17.184299   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:17.266371   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:17.266400   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:17.266419   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:17.343669   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:17.343698   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:19.886541   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:19.899891   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:19.899951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:19.946592   73230 cri.go:89] found id: ""
	I0906 20:06:19.946621   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.946630   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:19.946636   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:19.946686   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:19.981758   73230 cri.go:89] found id: ""
	I0906 20:06:19.981788   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.981797   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:19.981802   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:19.981854   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:20.018372   73230 cri.go:89] found id: ""
	I0906 20:06:20.018397   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.018405   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:20.018411   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:20.018460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:20.054380   73230 cri.go:89] found id: ""
	I0906 20:06:20.054428   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.054440   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:20.054449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:20.054521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:20.092343   73230 cri.go:89] found id: ""
	I0906 20:06:20.092376   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.092387   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:20.092395   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:20.092463   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:20.128568   73230 cri.go:89] found id: ""
	I0906 20:06:20.128594   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.128604   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:20.128610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:20.128657   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:20.166018   73230 cri.go:89] found id: ""
	I0906 20:06:20.166046   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.166057   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:20.166072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:20.166125   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:20.203319   73230 cri.go:89] found id: ""
	I0906 20:06:20.203347   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.203355   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:20.203365   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:20.203381   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:20.287217   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:20.287243   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:20.287259   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:20.372799   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:20.372834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:20.416595   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:20.416620   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.338546   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.342409   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.838689   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.657411   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:22.157972   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.691327   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.692066   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:20.468340   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:20.468378   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:22.983259   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:22.997014   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:22.997098   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:23.034483   73230 cri.go:89] found id: ""
	I0906 20:06:23.034513   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.034524   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:23.034531   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:23.034597   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:23.072829   73230 cri.go:89] found id: ""
	I0906 20:06:23.072867   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.072878   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:23.072885   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:23.072949   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:23.110574   73230 cri.go:89] found id: ""
	I0906 20:06:23.110602   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.110613   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:23.110620   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:23.110684   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:23.149506   73230 cri.go:89] found id: ""
	I0906 20:06:23.149538   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.149550   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:23.149557   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:23.149619   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:23.191321   73230 cri.go:89] found id: ""
	I0906 20:06:23.191355   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.191367   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:23.191374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:23.191441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:23.233737   73230 cri.go:89] found id: ""
	I0906 20:06:23.233770   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.233791   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:23.233800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:23.233873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:23.270013   73230 cri.go:89] found id: ""
	I0906 20:06:23.270048   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.270060   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:23.270068   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:23.270127   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:23.309517   73230 cri.go:89] found id: ""
	I0906 20:06:23.309541   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.309549   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:23.309566   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:23.309578   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:23.380645   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:23.380675   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:23.380690   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:23.463656   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:23.463696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:23.504100   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:23.504134   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:23.557438   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:23.557483   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:23.841101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.340722   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.658261   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:27.155171   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.193829   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.690602   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.074045   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:26.088006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:26.088072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:26.124445   73230 cri.go:89] found id: ""
	I0906 20:06:26.124469   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.124476   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:26.124482   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:26.124537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:26.158931   73230 cri.go:89] found id: ""
	I0906 20:06:26.158957   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.158968   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:26.158975   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:26.159035   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:26.197125   73230 cri.go:89] found id: ""
	I0906 20:06:26.197154   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.197164   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:26.197171   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:26.197234   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:26.233241   73230 cri.go:89] found id: ""
	I0906 20:06:26.233278   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.233291   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:26.233300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:26.233366   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:26.269910   73230 cri.go:89] found id: ""
	I0906 20:06:26.269943   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.269955   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:26.269962   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:26.270026   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:26.308406   73230 cri.go:89] found id: ""
	I0906 20:06:26.308439   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.308450   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:26.308459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:26.308521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:26.344248   73230 cri.go:89] found id: ""
	I0906 20:06:26.344276   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.344288   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:26.344295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:26.344353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:26.391794   73230 cri.go:89] found id: ""
	I0906 20:06:26.391827   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.391840   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:26.391851   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:26.391866   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:26.444192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:26.444231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:26.459113   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:26.459144   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:26.533920   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:26.533945   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:26.533960   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:26.616382   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:26.616416   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:29.160429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:29.175007   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:29.175063   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:29.212929   73230 cri.go:89] found id: ""
	I0906 20:06:29.212961   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.212972   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:29.212980   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:29.213042   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:29.250777   73230 cri.go:89] found id: ""
	I0906 20:06:29.250806   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.250815   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:29.250821   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:29.250870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:29.292222   73230 cri.go:89] found id: ""
	I0906 20:06:29.292253   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.292262   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:29.292268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:29.292331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:29.328379   73230 cri.go:89] found id: ""
	I0906 20:06:29.328413   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.328431   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:29.328436   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:29.328482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:29.366792   73230 cri.go:89] found id: ""
	I0906 20:06:29.366822   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.366834   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:29.366841   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:29.366903   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:29.402233   73230 cri.go:89] found id: ""
	I0906 20:06:29.402261   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.402270   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:29.402276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:29.402331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:29.436695   73230 cri.go:89] found id: ""
	I0906 20:06:29.436724   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.436731   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:29.436736   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:29.436787   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:29.473050   73230 cri.go:89] found id: ""
	I0906 20:06:29.473074   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.473082   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:29.473091   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:29.473101   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:29.524981   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:29.525018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:29.538698   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:29.538722   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:29.611026   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:29.611049   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:29.611064   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:29.686898   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:29.686931   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:28.839118   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:30.839532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:29.156985   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.656552   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:28.694188   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.191032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.192623   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:32.228399   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:32.244709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:32.244775   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:32.285681   73230 cri.go:89] found id: ""
	I0906 20:06:32.285713   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.285724   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:32.285732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:32.285794   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:32.325312   73230 cri.go:89] found id: ""
	I0906 20:06:32.325340   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.325349   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:32.325355   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:32.325400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:32.361420   73230 cri.go:89] found id: ""
	I0906 20:06:32.361455   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.361468   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:32.361477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:32.361543   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:32.398881   73230 cri.go:89] found id: ""
	I0906 20:06:32.398956   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.398971   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:32.398984   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:32.399041   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:32.435336   73230 cri.go:89] found id: ""
	I0906 20:06:32.435362   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.435370   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:32.435375   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:32.435427   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:32.472849   73230 cri.go:89] found id: ""
	I0906 20:06:32.472900   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.472909   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:32.472914   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:32.472964   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:32.508176   73230 cri.go:89] found id: ""
	I0906 20:06:32.508199   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.508208   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:32.508213   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:32.508271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:32.550519   73230 cri.go:89] found id: ""
	I0906 20:06:32.550550   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.550561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:32.550576   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:32.550593   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:32.601362   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:32.601394   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:32.614821   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:32.614849   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:32.686044   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:32.686061   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:32.686074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:32.767706   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:32.767744   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:35.309159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:35.322386   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:35.322462   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:35.362909   73230 cri.go:89] found id: ""
	I0906 20:06:35.362937   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.362948   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:35.362955   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:35.363017   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:35.400591   73230 cri.go:89] found id: ""
	I0906 20:06:35.400621   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.400629   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:35.400635   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:35.400682   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:35.436547   73230 cri.go:89] found id: ""
	I0906 20:06:35.436578   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.436589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:35.436596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:35.436666   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:33.338812   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.340154   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.656782   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.657043   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.691312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:37.691358   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.473130   73230 cri.go:89] found id: ""
	I0906 20:06:35.473155   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.473163   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:35.473168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:35.473244   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:35.509646   73230 cri.go:89] found id: ""
	I0906 20:06:35.509677   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.509687   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:35.509695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:35.509754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:35.547651   73230 cri.go:89] found id: ""
	I0906 20:06:35.547684   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.547696   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:35.547703   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:35.547761   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:35.608590   73230 cri.go:89] found id: ""
	I0906 20:06:35.608614   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.608624   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:35.608631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:35.608691   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:35.651508   73230 cri.go:89] found id: ""
	I0906 20:06:35.651550   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.651561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:35.651572   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:35.651585   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:35.705502   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:35.705542   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:35.719550   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:35.719577   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:35.791435   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:35.791461   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:35.791476   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:35.869018   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:35.869070   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:38.411587   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:38.425739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:38.425800   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:38.463534   73230 cri.go:89] found id: ""
	I0906 20:06:38.463560   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.463571   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:38.463578   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:38.463628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:38.499238   73230 cri.go:89] found id: ""
	I0906 20:06:38.499269   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.499280   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:38.499287   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:38.499340   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:38.536297   73230 cri.go:89] found id: ""
	I0906 20:06:38.536334   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.536345   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:38.536352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:38.536417   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:38.573672   73230 cri.go:89] found id: ""
	I0906 20:06:38.573701   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.573712   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:38.573720   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:38.573779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:38.610913   73230 cri.go:89] found id: ""
	I0906 20:06:38.610937   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.610945   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:38.610950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:38.610996   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:38.647335   73230 cri.go:89] found id: ""
	I0906 20:06:38.647359   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.647368   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:38.647374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:38.647418   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:38.684054   73230 cri.go:89] found id: ""
	I0906 20:06:38.684084   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.684097   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:38.684106   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:38.684174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:38.731134   73230 cri.go:89] found id: ""
	I0906 20:06:38.731161   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.731173   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:38.731183   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:38.731199   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:38.787757   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:38.787798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:38.802920   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:38.802955   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:38.889219   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:38.889246   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:38.889261   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:38.964999   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:38.965042   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:37.838886   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.338914   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:38.156615   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.656577   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:39.691609   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.692330   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.504406   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:41.518111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:41.518169   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:41.558701   73230 cri.go:89] found id: ""
	I0906 20:06:41.558727   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.558738   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:41.558746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:41.558807   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:41.595986   73230 cri.go:89] found id: ""
	I0906 20:06:41.596009   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.596017   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:41.596023   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:41.596070   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:41.631462   73230 cri.go:89] found id: ""
	I0906 20:06:41.631486   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.631494   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:41.631504   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:41.631559   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:41.669646   73230 cri.go:89] found id: ""
	I0906 20:06:41.669674   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.669686   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:41.669693   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:41.669754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:41.708359   73230 cri.go:89] found id: ""
	I0906 20:06:41.708383   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.708391   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:41.708398   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:41.708446   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:41.745712   73230 cri.go:89] found id: ""
	I0906 20:06:41.745737   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.745750   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:41.745756   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:41.745804   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:41.781862   73230 cri.go:89] found id: ""
	I0906 20:06:41.781883   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.781892   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:41.781898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:41.781946   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:41.816687   73230 cri.go:89] found id: ""
	I0906 20:06:41.816714   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.816722   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:41.816730   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:41.816742   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:41.830115   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:41.830145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:41.908303   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:41.908334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:41.908348   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:42.001459   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:42.001501   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:42.061341   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:42.061368   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:44.619574   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:44.633355   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:44.633423   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:44.668802   73230 cri.go:89] found id: ""
	I0906 20:06:44.668834   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.668845   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:44.668852   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:44.668924   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:44.707613   73230 cri.go:89] found id: ""
	I0906 20:06:44.707639   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.707650   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:44.707657   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:44.707727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:44.744202   73230 cri.go:89] found id: ""
	I0906 20:06:44.744231   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.744243   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:44.744250   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:44.744311   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:44.783850   73230 cri.go:89] found id: ""
	I0906 20:06:44.783873   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.783881   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:44.783886   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:44.783938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:44.824986   73230 cri.go:89] found id: ""
	I0906 20:06:44.825011   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.825019   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:44.825025   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:44.825073   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:44.865157   73230 cri.go:89] found id: ""
	I0906 20:06:44.865182   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.865190   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:44.865196   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:44.865258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:44.908268   73230 cri.go:89] found id: ""
	I0906 20:06:44.908295   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.908305   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:44.908312   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:44.908359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:44.948669   73230 cri.go:89] found id: ""
	I0906 20:06:44.948697   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.948706   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:44.948716   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:44.948731   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:44.961862   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:44.961887   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:45.036756   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:45.036783   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:45.036801   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:45.116679   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:45.116717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:45.159756   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:45.159784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:42.339271   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.839443   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:43.155878   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:45.158884   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.192211   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:46.692140   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.714682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:47.730754   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:47.730820   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:47.783208   73230 cri.go:89] found id: ""
	I0906 20:06:47.783239   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.783249   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:47.783255   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:47.783312   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:47.844291   73230 cri.go:89] found id: ""
	I0906 20:06:47.844324   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.844336   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:47.844344   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:47.844407   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:47.881877   73230 cri.go:89] found id: ""
	I0906 20:06:47.881905   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.881913   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:47.881919   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:47.881986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:47.918034   73230 cri.go:89] found id: ""
	I0906 20:06:47.918058   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.918066   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:47.918072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:47.918126   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:47.957045   73230 cri.go:89] found id: ""
	I0906 20:06:47.957068   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.957077   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:47.957083   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:47.957134   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:47.993849   73230 cri.go:89] found id: ""
	I0906 20:06:47.993872   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.993883   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:47.993890   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:47.993951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:48.031214   73230 cri.go:89] found id: ""
	I0906 20:06:48.031239   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.031249   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:48.031257   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:48.031314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:48.064634   73230 cri.go:89] found id: ""
	I0906 20:06:48.064673   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.064690   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:48.064698   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:48.064710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:48.104307   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:48.104343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:48.158869   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:48.158900   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:48.173000   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:48.173026   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:48.248751   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:48.248774   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:48.248792   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:47.339014   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.339656   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.838817   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.656402   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.156349   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:52.156651   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.192411   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.691635   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.833490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:50.847618   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:50.847702   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:50.887141   73230 cri.go:89] found id: ""
	I0906 20:06:50.887167   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.887176   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:50.887181   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:50.887228   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:50.923435   73230 cri.go:89] found id: ""
	I0906 20:06:50.923480   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.923491   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:50.923499   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:50.923567   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:50.959704   73230 cri.go:89] found id: ""
	I0906 20:06:50.959730   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.959742   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:50.959748   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:50.959810   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:50.992994   73230 cri.go:89] found id: ""
	I0906 20:06:50.993023   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.993032   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:50.993037   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:50.993091   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:51.031297   73230 cri.go:89] found id: ""
	I0906 20:06:51.031321   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.031329   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:51.031335   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:51.031390   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:51.067698   73230 cri.go:89] found id: ""
	I0906 20:06:51.067721   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.067732   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:51.067739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:51.067799   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:51.102240   73230 cri.go:89] found id: ""
	I0906 20:06:51.102268   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.102278   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:51.102285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:51.102346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:51.137146   73230 cri.go:89] found id: ""
	I0906 20:06:51.137172   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.137183   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:51.137194   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:51.137209   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:51.216158   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:51.216194   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:51.256063   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:51.256088   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:51.309176   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:51.309210   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:51.323515   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:51.323544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:51.393281   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:53.893714   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:53.907807   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:53.907863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:53.947929   73230 cri.go:89] found id: ""
	I0906 20:06:53.947954   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.947962   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:53.947968   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:53.948014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:53.983005   73230 cri.go:89] found id: ""
	I0906 20:06:53.983028   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.983041   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:53.983046   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:53.983094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:54.019004   73230 cri.go:89] found id: ""
	I0906 20:06:54.019027   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.019035   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:54.019041   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:54.019094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:54.060240   73230 cri.go:89] found id: ""
	I0906 20:06:54.060266   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.060279   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:54.060285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:54.060336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:54.096432   73230 cri.go:89] found id: ""
	I0906 20:06:54.096461   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.096469   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:54.096475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:54.096537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:54.132992   73230 cri.go:89] found id: ""
	I0906 20:06:54.133021   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.133033   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:54.133040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:54.133103   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:54.172730   73230 cri.go:89] found id: ""
	I0906 20:06:54.172754   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.172766   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:54.172778   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:54.172839   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:54.212050   73230 cri.go:89] found id: ""
	I0906 20:06:54.212191   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.212202   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:54.212212   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:54.212234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:54.263603   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:54.263647   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:54.281291   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:54.281324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:54.359523   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:54.359545   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:54.359568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:54.442230   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:54.442265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:54.339159   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.841459   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.157379   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.656134   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.191878   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.691766   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.983744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:56.997451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:56.997527   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:57.034792   73230 cri.go:89] found id: ""
	I0906 20:06:57.034817   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.034825   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:57.034831   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:57.034883   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:57.073709   73230 cri.go:89] found id: ""
	I0906 20:06:57.073735   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.073745   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:57.073751   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:57.073803   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:57.122758   73230 cri.go:89] found id: ""
	I0906 20:06:57.122787   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.122798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:57.122808   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:57.122865   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:57.158208   73230 cri.go:89] found id: ""
	I0906 20:06:57.158242   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.158252   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:57.158262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:57.158323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:57.194004   73230 cri.go:89] found id: ""
	I0906 20:06:57.194029   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.194037   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:57.194044   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:57.194099   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:57.230068   73230 cri.go:89] found id: ""
	I0906 20:06:57.230099   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.230111   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:57.230119   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:57.230186   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:57.265679   73230 cri.go:89] found id: ""
	I0906 20:06:57.265707   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.265718   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:57.265735   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:57.265801   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:57.304917   73230 cri.go:89] found id: ""
	I0906 20:06:57.304946   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.304956   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:57.304967   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:57.304980   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:57.357238   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:57.357276   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:57.371648   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:57.371674   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:57.438572   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:57.438590   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:57.438602   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:57.528212   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:57.528256   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:00.071140   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:00.084975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:00.085055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:00.119680   73230 cri.go:89] found id: ""
	I0906 20:07:00.119713   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.119725   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:00.119732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:00.119786   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:00.155678   73230 cri.go:89] found id: ""
	I0906 20:07:00.155704   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.155716   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:00.155723   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:00.155769   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:00.190758   73230 cri.go:89] found id: ""
	I0906 20:07:00.190783   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.190793   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:00.190799   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:00.190863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:00.228968   73230 cri.go:89] found id: ""
	I0906 20:07:00.228999   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.229010   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:00.229018   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:00.229079   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:00.265691   73230 cri.go:89] found id: ""
	I0906 20:07:00.265722   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.265733   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:00.265741   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:00.265806   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:00.305785   73230 cri.go:89] found id: ""
	I0906 20:07:00.305812   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.305820   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:00.305825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:00.305872   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:00.341872   73230 cri.go:89] found id: ""
	I0906 20:07:00.341895   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.341902   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:00.341907   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:00.341955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:00.377661   73230 cri.go:89] found id: ""
	I0906 20:07:00.377690   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.377702   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:00.377712   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:00.377725   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:00.428215   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:00.428254   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:00.443135   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:00.443165   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 20:06:59.337996   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.338924   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:58.657236   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.156973   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:59.191556   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.192082   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.193511   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	W0906 20:07:00.518745   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:00.518768   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:00.518781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:00.604413   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:00.604448   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.146657   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:03.160610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:03.160665   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:03.200916   73230 cri.go:89] found id: ""
	I0906 20:07:03.200950   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.200960   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:03.200967   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:03.201029   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:03.239550   73230 cri.go:89] found id: ""
	I0906 20:07:03.239579   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.239592   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:03.239600   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:03.239660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:03.278216   73230 cri.go:89] found id: ""
	I0906 20:07:03.278244   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.278255   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:03.278263   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:03.278325   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:03.315028   73230 cri.go:89] found id: ""
	I0906 20:07:03.315059   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.315073   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:03.315080   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:03.315146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:03.354614   73230 cri.go:89] found id: ""
	I0906 20:07:03.354638   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.354647   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:03.354652   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:03.354710   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:03.390105   73230 cri.go:89] found id: ""
	I0906 20:07:03.390129   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.390138   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:03.390144   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:03.390190   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:03.427651   73230 cri.go:89] found id: ""
	I0906 20:07:03.427679   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.427687   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:03.427695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:03.427763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:03.463191   73230 cri.go:89] found id: ""
	I0906 20:07:03.463220   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.463230   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:03.463242   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:03.463288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:03.476966   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:03.476995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:03.558415   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:03.558441   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:03.558457   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:03.641528   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:03.641564   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.680916   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:03.680943   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:03.339511   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.340113   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.157907   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.160507   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.692151   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:08.191782   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:06.235947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:06.249589   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:06.249667   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:06.289193   73230 cri.go:89] found id: ""
	I0906 20:07:06.289223   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.289235   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:06.289242   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:06.289305   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:06.324847   73230 cri.go:89] found id: ""
	I0906 20:07:06.324887   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.324898   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:06.324904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:06.324966   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:06.361755   73230 cri.go:89] found id: ""
	I0906 20:07:06.361786   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.361798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:06.361806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:06.361873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:06.397739   73230 cri.go:89] found id: ""
	I0906 20:07:06.397766   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.397775   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:06.397780   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:06.397833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:06.432614   73230 cri.go:89] found id: ""
	I0906 20:07:06.432641   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.432649   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:06.432655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:06.432703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:06.467784   73230 cri.go:89] found id: ""
	I0906 20:07:06.467812   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.467823   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:06.467830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:06.467890   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:06.507055   73230 cri.go:89] found id: ""
	I0906 20:07:06.507085   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.507096   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:06.507104   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:06.507165   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:06.544688   73230 cri.go:89] found id: ""
	I0906 20:07:06.544720   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.544730   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:06.544740   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:06.544751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:06.597281   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:06.597314   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:06.612749   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:06.612774   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:06.684973   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:06.684993   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:06.685006   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:06.764306   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:06.764345   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.304340   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:09.317460   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:09.317536   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:09.354289   73230 cri.go:89] found id: ""
	I0906 20:07:09.354312   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.354322   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:09.354327   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:09.354373   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:09.390962   73230 cri.go:89] found id: ""
	I0906 20:07:09.390997   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.391008   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:09.391015   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:09.391076   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:09.427456   73230 cri.go:89] found id: ""
	I0906 20:07:09.427491   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.427502   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:09.427510   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:09.427572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:09.462635   73230 cri.go:89] found id: ""
	I0906 20:07:09.462667   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.462680   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:09.462687   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:09.462749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:09.506726   73230 cri.go:89] found id: ""
	I0906 20:07:09.506751   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.506767   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:09.506775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:09.506836   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:09.541974   73230 cri.go:89] found id: ""
	I0906 20:07:09.541999   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.542009   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:09.542017   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:09.542077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:09.580069   73230 cri.go:89] found id: ""
	I0906 20:07:09.580104   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.580115   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:09.580123   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:09.580182   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:09.616025   73230 cri.go:89] found id: ""
	I0906 20:07:09.616054   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.616065   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:09.616075   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:09.616090   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:09.630967   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:09.630993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:09.716733   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:09.716766   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:09.716782   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:09.792471   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:09.792503   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.832326   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:09.832357   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:07.840909   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.339239   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:07.655710   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:09.656069   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:11.656458   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.192155   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.192716   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.385565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:12.398694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:12.398768   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:12.437446   73230 cri.go:89] found id: ""
	I0906 20:07:12.437473   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.437482   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:12.437487   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:12.437555   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:12.473328   73230 cri.go:89] found id: ""
	I0906 20:07:12.473355   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.473362   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:12.473372   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:12.473429   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:12.510935   73230 cri.go:89] found id: ""
	I0906 20:07:12.510962   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.510972   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:12.510979   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:12.511044   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:12.547961   73230 cri.go:89] found id: ""
	I0906 20:07:12.547991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.547999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:12.548005   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:12.548062   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:12.585257   73230 cri.go:89] found id: ""
	I0906 20:07:12.585291   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.585302   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:12.585309   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:12.585369   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:12.623959   73230 cri.go:89] found id: ""
	I0906 20:07:12.623991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.624003   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:12.624010   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:12.624066   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:12.662795   73230 cri.go:89] found id: ""
	I0906 20:07:12.662822   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.662832   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:12.662840   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:12.662896   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:12.700941   73230 cri.go:89] found id: ""
	I0906 20:07:12.700967   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.700974   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:12.700983   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:12.700994   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:12.785989   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:12.786025   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:12.826678   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:12.826704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:12.881558   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:12.881599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:12.896035   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:12.896065   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:12.970721   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:12.839031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.339615   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:13.656809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.657470   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:14.691032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:16.692697   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.471171   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:15.484466   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:15.484541   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:15.518848   73230 cri.go:89] found id: ""
	I0906 20:07:15.518875   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.518886   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:15.518894   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:15.518953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:15.553444   73230 cri.go:89] found id: ""
	I0906 20:07:15.553468   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.553476   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:15.553482   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:15.553528   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:15.589136   73230 cri.go:89] found id: ""
	I0906 20:07:15.589160   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.589168   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:15.589173   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:15.589220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:15.624410   73230 cri.go:89] found id: ""
	I0906 20:07:15.624434   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.624443   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:15.624449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:15.624492   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:15.661506   73230 cri.go:89] found id: ""
	I0906 20:07:15.661535   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.661547   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:15.661555   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:15.661615   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:15.699126   73230 cri.go:89] found id: ""
	I0906 20:07:15.699148   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.699155   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:15.699161   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:15.699207   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:15.736489   73230 cri.go:89] found id: ""
	I0906 20:07:15.736523   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.736534   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:15.736542   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:15.736604   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:15.771988   73230 cri.go:89] found id: ""
	I0906 20:07:15.772013   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.772020   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:15.772029   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:15.772045   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:15.822734   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:15.822765   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:15.836820   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:15.836872   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:15.915073   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:15.915111   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:15.915126   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:15.988476   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:15.988514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:18.528710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:18.541450   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:18.541526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:18.581278   73230 cri.go:89] found id: ""
	I0906 20:07:18.581308   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.581317   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:18.581323   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:18.581381   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:18.616819   73230 cri.go:89] found id: ""
	I0906 20:07:18.616843   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.616850   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:18.616871   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:18.616923   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:18.655802   73230 cri.go:89] found id: ""
	I0906 20:07:18.655827   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.655842   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:18.655849   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:18.655908   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:18.693655   73230 cri.go:89] found id: ""
	I0906 20:07:18.693679   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.693689   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:18.693696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:18.693779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:18.730882   73230 cri.go:89] found id: ""
	I0906 20:07:18.730914   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.730924   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:18.730931   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:18.730994   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:18.767219   73230 cri.go:89] found id: ""
	I0906 20:07:18.767243   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.767250   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:18.767256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:18.767316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:18.802207   73230 cri.go:89] found id: ""
	I0906 20:07:18.802230   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.802238   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:18.802243   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:18.802300   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:18.840449   73230 cri.go:89] found id: ""
	I0906 20:07:18.840471   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.840481   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:18.840491   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:18.840504   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:18.892430   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:18.892469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:18.906527   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:18.906561   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:18.980462   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:18.980483   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:18.980494   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:19.059550   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:19.059588   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:17.340292   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:19.840090   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.156486   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:20.657764   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.693021   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.191529   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.191865   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.599879   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:21.614131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:21.614205   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:21.650887   73230 cri.go:89] found id: ""
	I0906 20:07:21.650910   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.650919   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:21.650924   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:21.650978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:21.684781   73230 cri.go:89] found id: ""
	I0906 20:07:21.684809   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.684819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:21.684827   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:21.684907   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:21.722685   73230 cri.go:89] found id: ""
	I0906 20:07:21.722711   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.722722   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:21.722729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:21.722791   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:21.757581   73230 cri.go:89] found id: ""
	I0906 20:07:21.757607   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.757616   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:21.757622   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:21.757670   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:21.791984   73230 cri.go:89] found id: ""
	I0906 20:07:21.792008   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.792016   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:21.792022   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:21.792072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:21.853612   73230 cri.go:89] found id: ""
	I0906 20:07:21.853636   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.853644   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:21.853650   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:21.853699   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:21.894184   73230 cri.go:89] found id: ""
	I0906 20:07:21.894232   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.894247   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:21.894256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:21.894318   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:21.930731   73230 cri.go:89] found id: ""
	I0906 20:07:21.930758   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.930768   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:21.930779   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:21.930798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:21.969174   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:21.969207   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:22.017647   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:22.017680   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:22.033810   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:22.033852   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:22.111503   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:22.111530   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:22.111544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:24.696348   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:24.710428   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:24.710506   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:24.747923   73230 cri.go:89] found id: ""
	I0906 20:07:24.747958   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.747969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:24.747977   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:24.748037   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:24.782216   73230 cri.go:89] found id: ""
	I0906 20:07:24.782250   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.782260   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:24.782268   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:24.782329   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:24.822093   73230 cri.go:89] found id: ""
	I0906 20:07:24.822126   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.822137   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:24.822148   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:24.822217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:24.857166   73230 cri.go:89] found id: ""
	I0906 20:07:24.857202   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.857213   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:24.857224   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:24.857314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:24.892575   73230 cri.go:89] found id: ""
	I0906 20:07:24.892610   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.892621   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:24.892629   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:24.892689   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:24.929102   73230 cri.go:89] found id: ""
	I0906 20:07:24.929130   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.929140   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:24.929149   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:24.929206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:24.964224   73230 cri.go:89] found id: ""
	I0906 20:07:24.964257   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.964268   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:24.964276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:24.964337   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:25.000453   73230 cri.go:89] found id: ""
	I0906 20:07:25.000475   73230 logs.go:276] 0 containers: []
	W0906 20:07:25.000485   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:25.000496   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:25.000511   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:25.041824   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:25.041851   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:25.093657   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:25.093692   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:25.107547   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:25.107576   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:25.178732   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:25.178755   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:25.178771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:22.338864   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:24.339432   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:26.838165   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.156449   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.156979   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.158086   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.192653   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.693480   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.764271   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:27.777315   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:27.777389   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:27.812621   73230 cri.go:89] found id: ""
	I0906 20:07:27.812644   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.812655   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:27.812663   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:27.812718   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:27.853063   73230 cri.go:89] found id: ""
	I0906 20:07:27.853093   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.853104   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:27.853112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:27.853171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:27.894090   73230 cri.go:89] found id: ""
	I0906 20:07:27.894118   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.894130   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:27.894137   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:27.894196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:27.930764   73230 cri.go:89] found id: ""
	I0906 20:07:27.930791   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.930802   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:27.930809   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:27.930870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:27.967011   73230 cri.go:89] found id: ""
	I0906 20:07:27.967036   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.967047   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:27.967053   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:27.967111   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:28.002119   73230 cri.go:89] found id: ""
	I0906 20:07:28.002146   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.002157   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:28.002164   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:28.002226   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:28.043884   73230 cri.go:89] found id: ""
	I0906 20:07:28.043909   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.043917   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:28.043923   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:28.043979   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:28.081510   73230 cri.go:89] found id: ""
	I0906 20:07:28.081538   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.081547   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:28.081557   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:28.081568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:28.159077   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:28.159109   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:28.207489   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:28.207527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:28.267579   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:28.267613   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:28.287496   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:28.287529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:28.376555   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:28.838301   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.843091   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:29.655598   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:31.657757   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.192112   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:32.692354   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.876683   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:30.890344   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:30.890424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:30.930618   73230 cri.go:89] found id: ""
	I0906 20:07:30.930647   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.930658   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:30.930666   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:30.930727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:30.968801   73230 cri.go:89] found id: ""
	I0906 20:07:30.968825   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.968834   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:30.968839   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:30.968911   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:31.006437   73230 cri.go:89] found id: ""
	I0906 20:07:31.006463   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.006472   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:31.006477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:31.006531   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:31.042091   73230 cri.go:89] found id: ""
	I0906 20:07:31.042117   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.042125   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:31.042131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:31.042177   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:31.079244   73230 cri.go:89] found id: ""
	I0906 20:07:31.079271   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.079280   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:31.079286   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:31.079336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:31.116150   73230 cri.go:89] found id: ""
	I0906 20:07:31.116174   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.116182   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:31.116188   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:31.116240   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:31.151853   73230 cri.go:89] found id: ""
	I0906 20:07:31.151877   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.151886   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:31.151892   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:31.151939   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:31.189151   73230 cri.go:89] found id: ""
	I0906 20:07:31.189181   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.189192   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:31.189203   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:31.189218   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:31.234466   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:31.234493   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:31.286254   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:31.286288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:31.300500   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:31.300525   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:31.372968   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:31.372987   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:31.372997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:33.949865   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:33.964791   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:33.964849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:34.027049   73230 cri.go:89] found id: ""
	I0906 20:07:34.027082   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.027094   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:34.027102   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:34.027162   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:34.080188   73230 cri.go:89] found id: ""
	I0906 20:07:34.080218   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.080230   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:34.080237   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:34.080320   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:34.124146   73230 cri.go:89] found id: ""
	I0906 20:07:34.124171   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.124179   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:34.124185   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:34.124230   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:34.161842   73230 cri.go:89] found id: ""
	I0906 20:07:34.161864   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.161872   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:34.161878   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:34.161938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:34.201923   73230 cri.go:89] found id: ""
	I0906 20:07:34.201951   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.201961   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:34.201967   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:34.202032   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:34.246609   73230 cri.go:89] found id: ""
	I0906 20:07:34.246644   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.246656   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:34.246665   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:34.246739   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:34.287616   73230 cri.go:89] found id: ""
	I0906 20:07:34.287646   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.287657   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:34.287663   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:34.287721   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:34.322270   73230 cri.go:89] found id: ""
	I0906 20:07:34.322297   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.322309   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:34.322320   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:34.322334   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:34.378598   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:34.378633   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:34.392748   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:34.392781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:34.468620   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:34.468648   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:34.468663   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:34.548290   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:34.548324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:33.339665   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.339890   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:34.157895   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:36.656829   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.192386   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.192574   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.095962   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:37.110374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:37.110459   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:37.146705   73230 cri.go:89] found id: ""
	I0906 20:07:37.146732   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.146740   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:37.146746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:37.146802   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:37.185421   73230 cri.go:89] found id: ""
	I0906 20:07:37.185449   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.185461   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:37.185468   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:37.185532   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:37.224767   73230 cri.go:89] found id: ""
	I0906 20:07:37.224793   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.224801   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:37.224806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:37.224884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:37.265392   73230 cri.go:89] found id: ""
	I0906 20:07:37.265422   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.265432   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:37.265438   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:37.265496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:37.302065   73230 cri.go:89] found id: ""
	I0906 20:07:37.302093   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.302101   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:37.302107   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:37.302171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:37.341466   73230 cri.go:89] found id: ""
	I0906 20:07:37.341493   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.341505   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:37.341513   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:37.341576   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.377701   73230 cri.go:89] found id: ""
	I0906 20:07:37.377724   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.377732   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:37.377738   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:37.377798   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:37.412927   73230 cri.go:89] found id: ""
	I0906 20:07:37.412955   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.412966   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:37.412977   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:37.412993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:37.427750   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:37.427776   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:37.500904   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:37.500928   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:37.500945   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:37.583204   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:37.583246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:37.623477   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:37.623512   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.179798   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:40.194295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:40.194372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:40.229731   73230 cri.go:89] found id: ""
	I0906 20:07:40.229768   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.229779   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:40.229787   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:40.229848   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:40.275909   73230 cri.go:89] found id: ""
	I0906 20:07:40.275943   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.275956   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:40.275964   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:40.276049   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:40.316552   73230 cri.go:89] found id: ""
	I0906 20:07:40.316585   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.316594   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:40.316599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:40.316647   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:40.355986   73230 cri.go:89] found id: ""
	I0906 20:07:40.356017   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.356028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:40.356036   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:40.356095   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:40.396486   73230 cri.go:89] found id: ""
	I0906 20:07:40.396522   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.396535   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:40.396544   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:40.396609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:40.440311   73230 cri.go:89] found id: ""
	I0906 20:07:40.440338   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.440346   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:40.440352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:40.440414   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.346532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.839521   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.156737   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.156967   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.691703   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.691972   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:40.476753   73230 cri.go:89] found id: ""
	I0906 20:07:40.476781   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.476790   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:40.476797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:40.476844   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:40.514462   73230 cri.go:89] found id: ""
	I0906 20:07:40.514489   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.514500   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:40.514511   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:40.514527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:40.553670   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:40.553700   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.608304   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:40.608343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:40.622486   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:40.622514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:40.699408   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:40.699434   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:40.699451   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.278892   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:43.292455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:43.292526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:43.328900   73230 cri.go:89] found id: ""
	I0906 20:07:43.328929   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.328940   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:43.328948   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:43.329009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:43.366728   73230 cri.go:89] found id: ""
	I0906 20:07:43.366754   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.366762   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:43.366768   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:43.366817   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:43.401566   73230 cri.go:89] found id: ""
	I0906 20:07:43.401590   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.401599   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:43.401604   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:43.401650   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:43.437022   73230 cri.go:89] found id: ""
	I0906 20:07:43.437051   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.437063   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:43.437072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:43.437140   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:43.473313   73230 cri.go:89] found id: ""
	I0906 20:07:43.473342   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.473354   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:43.473360   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:43.473420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:43.513590   73230 cri.go:89] found id: ""
	I0906 20:07:43.513616   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.513624   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:43.513630   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:43.513690   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:43.549974   73230 cri.go:89] found id: ""
	I0906 20:07:43.550011   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.550025   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:43.550032   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:43.550100   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:43.592386   73230 cri.go:89] found id: ""
	I0906 20:07:43.592426   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.592444   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:43.592454   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:43.592482   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:43.607804   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:43.607841   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:43.679533   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:43.679568   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:43.679580   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.762111   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:43.762145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:43.802883   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:43.802908   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
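
Each cycle above follows the same shape: pgrep for a kube-apiserver process, crictl queries for every control-plane component (each returning an empty ID list, hence the `found id: ""` and `0 containers` lines), then a fallback to gathering kubelet, dmesg, CRI-O, and container-status logs. A small sketch of that kind of existence check, with illustrative helper names that are not minikube's actual API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerExists runs `crictl ps -a --quiet --name=<name>` and reports whether
// any container IDs came back; an empty result corresponds to the
// `found id: ""` / `0 containers` lines in the log above.
func containerExists(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return false, err
	}
	return len(strings.Fields(string(out))) > 0, nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ok, err := containerExists(c)
		if err != nil {
			fmt.Printf("%s: query failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: container present: %v\n", c, ok)
	}
}

Run on the node itself (it assumes sudo and crictl are available, as they are inside the minikube VM), this reproduces the all-empty result the log records before each log-gathering pass.
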
	I0906 20:07:42.340252   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:44.838648   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:46.838831   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.157956   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.657410   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.693014   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.693640   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.191509   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
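
The interleaved pod_ready lines belong to the other clusters in this parallel run, each polling a metrics-server pod whose Ready condition is still False. A client-go sketch of that kind of readiness check; the kubeconfig path, namespace value, and pod name below are placeholders, not values taken from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same check
// behind the `has status "Ready":"False"` lines above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-example", metav1.GetOptions{}) // placeholder pod name
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod))
}
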
	I0906 20:07:46.358429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:46.371252   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:46.371326   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:46.406397   73230 cri.go:89] found id: ""
	I0906 20:07:46.406420   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.406430   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:46.406437   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:46.406496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:46.452186   73230 cri.go:89] found id: ""
	I0906 20:07:46.452209   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.452218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:46.452223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:46.452288   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:46.489418   73230 cri.go:89] found id: ""
	I0906 20:07:46.489443   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.489454   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:46.489461   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:46.489523   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:46.529650   73230 cri.go:89] found id: ""
	I0906 20:07:46.529679   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.529690   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:46.529698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:46.529760   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:46.566429   73230 cri.go:89] found id: ""
	I0906 20:07:46.566454   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.566466   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:46.566474   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:46.566539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:46.604999   73230 cri.go:89] found id: ""
	I0906 20:07:46.605026   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.605034   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:46.605040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:46.605085   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:46.643116   73230 cri.go:89] found id: ""
	I0906 20:07:46.643144   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.643155   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:46.643162   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:46.643222   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:46.679734   73230 cri.go:89] found id: ""
	I0906 20:07:46.679756   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.679764   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:46.679772   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:46.679784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:46.736380   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:46.736430   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:46.750649   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:46.750681   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:46.833098   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:46.833130   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:46.833146   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:46.912223   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:46.912267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.453662   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:49.466520   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:49.466585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:49.508009   73230 cri.go:89] found id: ""
	I0906 20:07:49.508038   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.508049   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:49.508056   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:49.508119   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:49.545875   73230 cri.go:89] found id: ""
	I0906 20:07:49.545900   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.545911   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:49.545918   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:49.545978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:49.584899   73230 cri.go:89] found id: ""
	I0906 20:07:49.584926   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.584933   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:49.584940   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:49.585001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:49.621044   73230 cri.go:89] found id: ""
	I0906 20:07:49.621073   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.621085   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:49.621092   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:49.621146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:49.657074   73230 cri.go:89] found id: ""
	I0906 20:07:49.657099   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.657108   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:49.657115   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:49.657174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:49.693734   73230 cri.go:89] found id: ""
	I0906 20:07:49.693759   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.693767   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:49.693773   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:49.693827   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:49.729920   73230 cri.go:89] found id: ""
	I0906 20:07:49.729950   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.729960   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:49.729965   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:49.730014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:49.765282   73230 cri.go:89] found id: ""
	I0906 20:07:49.765313   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.765324   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:49.765335   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:49.765350   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:49.842509   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:49.842531   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:49.842543   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:49.920670   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:49.920704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.961193   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:49.961220   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:50.014331   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:50.014366   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:48.839877   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:51.339381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.156290   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.157337   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.692055   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:53.191487   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.529758   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:52.543533   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:52.543596   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:52.582802   73230 cri.go:89] found id: ""
	I0906 20:07:52.582826   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.582838   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:52.582845   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:52.582909   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:52.625254   73230 cri.go:89] found id: ""
	I0906 20:07:52.625287   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.625308   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:52.625317   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:52.625383   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:52.660598   73230 cri.go:89] found id: ""
	I0906 20:07:52.660621   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.660632   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:52.660640   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:52.660703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:52.702980   73230 cri.go:89] found id: ""
	I0906 20:07:52.703004   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.703014   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:52.703021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:52.703082   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:52.740361   73230 cri.go:89] found id: ""
	I0906 20:07:52.740387   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.740394   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:52.740400   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:52.740447   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:52.780011   73230 cri.go:89] found id: ""
	I0906 20:07:52.780043   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.780056   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:52.780063   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:52.780123   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:52.825546   73230 cri.go:89] found id: ""
	I0906 20:07:52.825583   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.825595   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:52.825602   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:52.825659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:52.864347   73230 cri.go:89] found id: ""
	I0906 20:07:52.864381   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.864393   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:52.864403   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:52.864417   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:52.943041   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:52.943077   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:52.986158   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:52.986185   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:53.039596   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:53.039635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:53.054265   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:53.054295   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:53.125160   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:53.339887   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.839233   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.657521   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.157101   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.192803   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.692328   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.626058   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:55.639631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:55.639705   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:55.677283   73230 cri.go:89] found id: ""
	I0906 20:07:55.677304   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.677312   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:55.677317   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:55.677372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:55.714371   73230 cri.go:89] found id: ""
	I0906 20:07:55.714402   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.714414   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:55.714422   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:55.714509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:55.753449   73230 cri.go:89] found id: ""
	I0906 20:07:55.753487   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.753500   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:55.753507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:55.753575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:55.792955   73230 cri.go:89] found id: ""
	I0906 20:07:55.792987   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.792999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:55.793006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:55.793074   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:55.827960   73230 cri.go:89] found id: ""
	I0906 20:07:55.827985   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.827996   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:55.828003   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:55.828052   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:55.867742   73230 cri.go:89] found id: ""
	I0906 20:07:55.867765   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.867778   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:55.867785   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:55.867849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:55.907328   73230 cri.go:89] found id: ""
	I0906 20:07:55.907352   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.907359   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:55.907365   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:55.907424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:55.946057   73230 cri.go:89] found id: ""
	I0906 20:07:55.946091   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.946099   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:55.946108   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:55.946119   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:56.033579   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:56.033598   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:56.033611   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:56.116337   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:56.116372   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:56.163397   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:56.163428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:56.217189   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:56.217225   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:58.736147   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:58.749729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:58.749833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:58.786375   73230 cri.go:89] found id: ""
	I0906 20:07:58.786399   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.786406   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:58.786412   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:58.786460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:58.825188   73230 cri.go:89] found id: ""
	I0906 20:07:58.825210   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.825218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:58.825223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:58.825271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:58.866734   73230 cri.go:89] found id: ""
	I0906 20:07:58.866756   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.866764   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:58.866769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:58.866823   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:58.909742   73230 cri.go:89] found id: ""
	I0906 20:07:58.909774   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.909785   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:58.909793   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:58.909850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:58.950410   73230 cri.go:89] found id: ""
	I0906 20:07:58.950438   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.950447   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:58.950452   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:58.950500   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:58.987431   73230 cri.go:89] found id: ""
	I0906 20:07:58.987454   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.987462   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:58.987468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:58.987518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:59.023432   73230 cri.go:89] found id: ""
	I0906 20:07:59.023462   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.023474   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:59.023482   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:59.023544   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:59.057695   73230 cri.go:89] found id: ""
	I0906 20:07:59.057724   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.057734   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:59.057743   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:59.057755   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:59.109634   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:59.109671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:59.125436   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:59.125479   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:59.202018   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:59.202040   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:59.202054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:59.281418   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:59.281456   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:58.339751   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.842794   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.658145   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.155679   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.157913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.192179   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.193068   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:01.823947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:01.839055   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:01.839115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:01.876178   73230 cri.go:89] found id: ""
	I0906 20:08:01.876206   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.876215   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:01.876220   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:01.876274   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:01.912000   73230 cri.go:89] found id: ""
	I0906 20:08:01.912028   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.912038   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:01.912045   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:01.912107   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:01.948382   73230 cri.go:89] found id: ""
	I0906 20:08:01.948412   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.948420   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:01.948426   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:01.948474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:01.982991   73230 cri.go:89] found id: ""
	I0906 20:08:01.983019   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.983028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:01.983033   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:01.983080   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:02.016050   73230 cri.go:89] found id: ""
	I0906 20:08:02.016076   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.016085   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:02.016091   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:02.016151   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:02.051087   73230 cri.go:89] found id: ""
	I0906 20:08:02.051125   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.051137   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:02.051150   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:02.051214   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:02.093230   73230 cri.go:89] found id: ""
	I0906 20:08:02.093254   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.093263   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:02.093268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:02.093323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:02.130580   73230 cri.go:89] found id: ""
	I0906 20:08:02.130609   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.130619   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:02.130629   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:02.130644   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:02.183192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:02.183231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:02.199079   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:02.199110   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:02.274259   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:02.274279   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:02.274303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:02.356198   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:02.356234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:04.899180   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:04.912879   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:04.912955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:04.950598   73230 cri.go:89] found id: ""
	I0906 20:08:04.950632   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.950642   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:04.950656   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:04.950713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:04.986474   73230 cri.go:89] found id: ""
	I0906 20:08:04.986504   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.986513   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:04.986519   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:04.986570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:05.025837   73230 cri.go:89] found id: ""
	I0906 20:08:05.025868   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.025877   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:05.025884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:05.025934   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:05.063574   73230 cri.go:89] found id: ""
	I0906 20:08:05.063613   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.063622   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:05.063628   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:05.063674   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:05.101341   73230 cri.go:89] found id: ""
	I0906 20:08:05.101371   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.101383   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:05.101390   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:05.101461   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:05.148551   73230 cri.go:89] found id: ""
	I0906 20:08:05.148580   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.148591   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:05.148599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:05.148668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:05.186907   73230 cri.go:89] found id: ""
	I0906 20:08:05.186935   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.186945   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:05.186953   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:05.187019   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:05.226237   73230 cri.go:89] found id: ""
	I0906 20:08:05.226265   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.226275   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:05.226287   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:05.226300   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:05.242892   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:05.242925   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:05.317797   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:05.317824   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:05.317839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:05.400464   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:05.400500   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:05.442632   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:05.442657   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:03.340541   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:05.840156   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.655913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:06.657424   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.691255   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.191739   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.998033   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:08.012363   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:08.012441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:08.048816   73230 cri.go:89] found id: ""
	I0906 20:08:08.048847   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.048876   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:08.048884   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:08.048947   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:08.109623   73230 cri.go:89] found id: ""
	I0906 20:08:08.109650   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.109661   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:08.109668   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:08.109730   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:08.145405   73230 cri.go:89] found id: ""
	I0906 20:08:08.145432   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.145443   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:08.145451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:08.145514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:08.187308   73230 cri.go:89] found id: ""
	I0906 20:08:08.187344   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.187355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:08.187362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:08.187422   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:08.228782   73230 cri.go:89] found id: ""
	I0906 20:08:08.228815   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.228826   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:08.228833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:08.228918   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:08.269237   73230 cri.go:89] found id: ""
	I0906 20:08:08.269266   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.269276   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:08.269285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:08.269351   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:08.305115   73230 cri.go:89] found id: ""
	I0906 20:08:08.305141   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.305149   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:08.305155   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:08.305206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:08.345442   73230 cri.go:89] found id: ""
	I0906 20:08:08.345472   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.345483   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:08.345494   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:08.345510   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:08.396477   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:08.396518   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:08.410978   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:08.411002   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:08.486220   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:08.486247   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:08.486265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:08.574138   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:08.574190   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:08.339280   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:10.340142   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.156809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.160037   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.192303   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.192456   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.192684   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.117545   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:11.131884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:11.131944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:11.169481   73230 cri.go:89] found id: ""
	I0906 20:08:11.169507   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.169518   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:11.169525   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:11.169590   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:11.211068   73230 cri.go:89] found id: ""
	I0906 20:08:11.211092   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.211100   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:11.211105   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:11.211157   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:11.250526   73230 cri.go:89] found id: ""
	I0906 20:08:11.250560   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.250574   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:11.250580   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:11.250627   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:11.289262   73230 cri.go:89] found id: ""
	I0906 20:08:11.289284   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.289292   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:11.289299   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:11.289346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:11.335427   73230 cri.go:89] found id: ""
	I0906 20:08:11.335456   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.335467   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:11.335475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:11.335535   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:11.375481   73230 cri.go:89] found id: ""
	I0906 20:08:11.375509   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.375518   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:11.375524   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:11.375575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:11.416722   73230 cri.go:89] found id: ""
	I0906 20:08:11.416748   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.416758   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:11.416765   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:11.416830   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:11.452986   73230 cri.go:89] found id: ""
	I0906 20:08:11.453019   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.453030   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:11.453042   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:11.453059   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:11.466435   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:11.466461   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:11.545185   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:11.545212   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:11.545231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:11.627390   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:11.627422   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:11.674071   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:11.674098   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.225887   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:14.242121   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:14.242200   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:14.283024   73230 cri.go:89] found id: ""
	I0906 20:08:14.283055   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.283067   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:14.283074   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:14.283135   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:14.325357   73230 cri.go:89] found id: ""
	I0906 20:08:14.325379   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.325387   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:14.325392   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:14.325455   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:14.362435   73230 cri.go:89] found id: ""
	I0906 20:08:14.362459   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.362467   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:14.362473   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:14.362537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:14.398409   73230 cri.go:89] found id: ""
	I0906 20:08:14.398441   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.398450   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:14.398455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:14.398509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:14.434902   73230 cri.go:89] found id: ""
	I0906 20:08:14.434934   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.434943   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:14.434950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:14.435009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:14.476605   73230 cri.go:89] found id: ""
	I0906 20:08:14.476635   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.476647   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:14.476655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:14.476717   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:14.533656   73230 cri.go:89] found id: ""
	I0906 20:08:14.533681   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.533690   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:14.533696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:14.533753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:14.599661   73230 cri.go:89] found id: ""
	I0906 20:08:14.599685   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.599693   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:14.599702   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:14.599715   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.657680   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:14.657712   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:14.671594   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:14.671624   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:14.747945   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:14.747969   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:14.747979   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:14.829021   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:14.829057   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
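Every `kubectl describe nodes` attempt above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port of that node. A quick probe of the same symptom, assuming the default localhost:8443 endpoint seen in the error, might look like this:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The describe-nodes failures above all point at localhost:8443; a refused
	// TCP dial confirms no apiserver process is listening there.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```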
	I0906 20:08:12.838805   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:14.839569   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.659405   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:16.156840   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:15.692205   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.693709   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.373569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:17.388910   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:17.388987   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:17.428299   73230 cri.go:89] found id: ""
	I0906 20:08:17.428335   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.428347   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:17.428354   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:17.428419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:17.464660   73230 cri.go:89] found id: ""
	I0906 20:08:17.464685   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.464692   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:17.464697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:17.464758   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:17.500018   73230 cri.go:89] found id: ""
	I0906 20:08:17.500047   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.500059   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:17.500067   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:17.500130   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:17.536345   73230 cri.go:89] found id: ""
	I0906 20:08:17.536375   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.536386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:17.536394   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:17.536456   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:17.574668   73230 cri.go:89] found id: ""
	I0906 20:08:17.574696   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.574707   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:17.574715   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:17.574780   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:17.611630   73230 cri.go:89] found id: ""
	I0906 20:08:17.611653   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.611663   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:17.611669   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:17.611713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:17.647610   73230 cri.go:89] found id: ""
	I0906 20:08:17.647639   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.647649   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:17.647657   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:17.647724   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:17.686204   73230 cri.go:89] found id: ""
	I0906 20:08:17.686233   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.686246   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:17.686260   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:17.686273   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:17.702040   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:17.702069   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:17.775033   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:17.775058   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:17.775074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:17.862319   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:17.862359   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:17.905567   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:17.905604   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:17.339116   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:19.839554   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:21.839622   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:18.157104   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.657604   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.191024   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:22.192687   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.457191   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:20.471413   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:20.471474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:20.533714   73230 cri.go:89] found id: ""
	I0906 20:08:20.533749   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.533765   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:20.533772   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:20.533833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:20.580779   73230 cri.go:89] found id: ""
	I0906 20:08:20.580811   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.580823   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:20.580830   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:20.580902   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:20.619729   73230 cri.go:89] found id: ""
	I0906 20:08:20.619755   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.619763   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:20.619769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:20.619816   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:20.661573   73230 cri.go:89] found id: ""
	I0906 20:08:20.661599   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.661606   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:20.661612   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:20.661664   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:20.709409   73230 cri.go:89] found id: ""
	I0906 20:08:20.709443   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.709455   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:20.709463   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:20.709515   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:20.746743   73230 cri.go:89] found id: ""
	I0906 20:08:20.746783   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.746808   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:20.746816   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:20.746891   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:20.788129   73230 cri.go:89] found id: ""
	I0906 20:08:20.788155   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.788164   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:20.788170   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:20.788217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:20.825115   73230 cri.go:89] found id: ""
	I0906 20:08:20.825139   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.825147   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:20.825156   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:20.825167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:20.880975   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:20.881013   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:20.895027   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:20.895061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:20.972718   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:20.972739   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:20.972754   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:21.053062   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:21.053096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:23.595439   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:23.612354   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:23.612419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:23.654479   73230 cri.go:89] found id: ""
	I0906 20:08:23.654508   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.654519   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:23.654526   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:23.654591   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:23.690061   73230 cri.go:89] found id: ""
	I0906 20:08:23.690092   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.690103   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:23.690112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:23.690173   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:23.726644   73230 cri.go:89] found id: ""
	I0906 20:08:23.726670   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.726678   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:23.726684   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:23.726744   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:23.763348   73230 cri.go:89] found id: ""
	I0906 20:08:23.763378   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.763386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:23.763391   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:23.763452   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:23.799260   73230 cri.go:89] found id: ""
	I0906 20:08:23.799290   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.799299   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:23.799305   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:23.799359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:23.843438   73230 cri.go:89] found id: ""
	I0906 20:08:23.843470   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.843481   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:23.843489   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:23.843558   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:23.879818   73230 cri.go:89] found id: ""
	I0906 20:08:23.879847   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.879856   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:23.879867   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:23.879933   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:23.916182   73230 cri.go:89] found id: ""
	I0906 20:08:23.916207   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.916220   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:23.916229   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:23.916240   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:23.987003   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:23.987022   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:23.987033   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:24.073644   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:24.073684   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:24.118293   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:24.118328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:24.172541   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:24.172582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:23.840441   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.338539   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:23.155661   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:25.155855   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:27.157624   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:24.692350   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.692534   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.687747   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:26.702174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:26.702238   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:26.740064   73230 cri.go:89] found id: ""
	I0906 20:08:26.740093   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.740101   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:26.740108   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:26.740158   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:26.775198   73230 cri.go:89] found id: ""
	I0906 20:08:26.775227   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.775237   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:26.775244   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:26.775303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:26.808850   73230 cri.go:89] found id: ""
	I0906 20:08:26.808892   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.808903   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:26.808915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:26.808974   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:26.842926   73230 cri.go:89] found id: ""
	I0906 20:08:26.842953   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.842964   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:26.842972   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:26.843031   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:26.878621   73230 cri.go:89] found id: ""
	I0906 20:08:26.878649   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.878658   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:26.878664   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:26.878713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:26.921816   73230 cri.go:89] found id: ""
	I0906 20:08:26.921862   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.921875   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:26.921884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:26.921952   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:26.960664   73230 cri.go:89] found id: ""
	I0906 20:08:26.960692   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.960702   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:26.960709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:26.960771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:27.004849   73230 cri.go:89] found id: ""
	I0906 20:08:27.004904   73230 logs.go:276] 0 containers: []
	W0906 20:08:27.004913   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:27.004922   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:27.004934   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:27.056237   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:27.056267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:27.071882   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:27.071904   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:27.143927   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:27.143949   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:27.143961   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:27.223901   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:27.223935   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:29.766615   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:29.780295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:29.780367   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:29.817745   73230 cri.go:89] found id: ""
	I0906 20:08:29.817775   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.817784   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:29.817790   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:29.817852   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:29.855536   73230 cri.go:89] found id: ""
	I0906 20:08:29.855559   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.855567   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:29.855572   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:29.855628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:29.895043   73230 cri.go:89] found id: ""
	I0906 20:08:29.895092   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.895104   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:29.895111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:29.895178   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:29.939225   73230 cri.go:89] found id: ""
	I0906 20:08:29.939248   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.939256   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:29.939262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:29.939331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:29.974166   73230 cri.go:89] found id: ""
	I0906 20:08:29.974190   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.974198   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:29.974203   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:29.974258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:30.009196   73230 cri.go:89] found id: ""
	I0906 20:08:30.009226   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.009237   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:30.009245   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:30.009310   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:30.043939   73230 cri.go:89] found id: ""
	I0906 20:08:30.043962   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.043970   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:30.043976   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:30.044023   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:30.080299   73230 cri.go:89] found id: ""
	I0906 20:08:30.080328   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.080336   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:30.080345   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:30.080356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:30.131034   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:30.131068   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:30.145502   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:30.145536   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:30.219941   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:30.219963   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:30.219978   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:30.307958   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:30.307995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
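When the component containers are missing, the fallback "Gathering logs for ..." steps above collect a fixed set of node-level sources (kubelet journal, dmesg, CRI-O journal, container status). A rough local sketch of that collection loop, assuming the same shell commands shown in the log (minikube itself runs them on the node over SSH via ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same shell commands as the "Gathering logs for ..." lines above, run
	// locally via bash -c purely for illustration.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", name, err)
		}
		fmt.Printf("  collected %d bytes\n", len(out))
	}
}
```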
	I0906 20:08:28.839049   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.338815   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.656748   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.657112   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.192284   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.193181   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.854002   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:32.867937   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:32.867998   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:32.906925   73230 cri.go:89] found id: ""
	I0906 20:08:32.906957   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.906969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:32.906976   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:32.907038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:32.946662   73230 cri.go:89] found id: ""
	I0906 20:08:32.946691   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.946702   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:32.946710   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:32.946771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:32.981908   73230 cri.go:89] found id: ""
	I0906 20:08:32.981936   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.981944   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:32.981950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:32.982001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:33.014902   73230 cri.go:89] found id: ""
	I0906 20:08:33.014930   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.014939   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:33.014945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:33.015055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:33.051265   73230 cri.go:89] found id: ""
	I0906 20:08:33.051290   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.051298   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:33.051310   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:33.051363   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:33.085436   73230 cri.go:89] found id: ""
	I0906 20:08:33.085468   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.085480   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:33.085487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:33.085552   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:33.121483   73230 cri.go:89] found id: ""
	I0906 20:08:33.121509   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.121517   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:33.121523   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:33.121578   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:33.159883   73230 cri.go:89] found id: ""
	I0906 20:08:33.159915   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.159926   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:33.159937   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:33.159953   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:33.174411   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:33.174442   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:33.243656   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:33.243694   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:33.243710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:33.321782   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:33.321823   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:33.363299   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:33.363335   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:33.339645   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.839545   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.650358   72441 pod_ready.go:82] duration metric: took 4m0.000296679s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:32.650386   72441 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:32.650410   72441 pod_ready.go:39] duration metric: took 4m12.042795571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:32.650440   72441 kubeadm.go:597] duration metric: took 4m19.97234293s to restartPrimaryControlPlane
	W0906 20:08:32.650505   72441 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:32.650542   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
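The 72441 run above gives up after exactly 4m0s of polling the metrics-server pod for "Ready" and falls back to `kubeadm reset --force` before re-bootstrapping the control plane. A simplified sketch of that wait-then-reset flow; `checkReady` and the short demo timeout are illustrative stand-ins, not minikube's actual functions:

```go
package main

import (
	"fmt"
	"time"
)

// waitReady polls checkReady until it returns true or the timeout expires,
// mirroring the 4m0s pod_ready wait that timed out above.
func waitReady(checkReady func() bool, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if checkReady() {
			return true
		}
		time.Sleep(2 * time.Second)
	}
	return false
}

func main() {
	// A never-ready check and a short demo timeout stand in for the real
	// metrics-server readiness probe and its 4m0s budget.
	if !waitReady(func() bool { return false }, 10*time.Second) {
		fmt.Println("timed out; falling back to: kubeadm reset --cri-socket /var/run/crio/crio.sock --force")
	}
}
```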
	I0906 20:08:33.692877   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:36.192090   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:38.192465   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.916159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:35.929190   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:35.929265   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:35.962853   73230 cri.go:89] found id: ""
	I0906 20:08:35.962890   73230 logs.go:276] 0 containers: []
	W0906 20:08:35.962901   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:35.962909   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:35.962969   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:36.000265   73230 cri.go:89] found id: ""
	I0906 20:08:36.000309   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.000318   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:36.000324   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:36.000374   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:36.042751   73230 cri.go:89] found id: ""
	I0906 20:08:36.042781   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.042792   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:36.042800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:36.042859   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:36.077922   73230 cri.go:89] found id: ""
	I0906 20:08:36.077957   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.077967   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:36.077975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:36.078038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:36.114890   73230 cri.go:89] found id: ""
	I0906 20:08:36.114926   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.114937   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:36.114945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:36.114997   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:36.148058   73230 cri.go:89] found id: ""
	I0906 20:08:36.148089   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.148101   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:36.148108   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:36.148167   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:36.187334   73230 cri.go:89] found id: ""
	I0906 20:08:36.187361   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.187371   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:36.187379   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:36.187498   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:36.221295   73230 cri.go:89] found id: ""
	I0906 20:08:36.221331   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.221342   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:36.221353   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:36.221367   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:36.273489   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:36.273527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:36.287975   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:36.288005   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:36.366914   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:36.366937   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:36.366950   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:36.446582   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:36.446619   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.987075   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:39.001051   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:39.001113   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:39.038064   73230 cri.go:89] found id: ""
	I0906 20:08:39.038093   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.038103   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:39.038110   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:39.038175   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:39.075759   73230 cri.go:89] found id: ""
	I0906 20:08:39.075788   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.075799   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:39.075805   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:39.075866   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:39.113292   73230 cri.go:89] found id: ""
	I0906 20:08:39.113320   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.113331   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:39.113339   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:39.113404   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:39.157236   73230 cri.go:89] found id: ""
	I0906 20:08:39.157269   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.157281   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:39.157289   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:39.157362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:39.195683   73230 cri.go:89] found id: ""
	I0906 20:08:39.195704   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.195712   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:39.195717   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:39.195763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:39.234865   73230 cri.go:89] found id: ""
	I0906 20:08:39.234894   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.234903   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:39.234909   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:39.234961   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:39.269946   73230 cri.go:89] found id: ""
	I0906 20:08:39.269975   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.269983   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:39.269989   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:39.270034   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:39.306184   73230 cri.go:89] found id: ""
	I0906 20:08:39.306214   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.306225   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:39.306235   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:39.306249   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:39.357887   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:39.357920   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:39.371736   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:39.371767   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:39.445674   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:39.445695   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:39.445708   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:39.525283   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:39.525316   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.343370   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.839247   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.691846   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.694807   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.069066   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:42.083229   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:42.083313   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:42.124243   73230 cri.go:89] found id: ""
	I0906 20:08:42.124267   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.124275   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:42.124280   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:42.124330   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:42.162070   73230 cri.go:89] found id: ""
	I0906 20:08:42.162102   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.162113   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:42.162120   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:42.162183   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:42.199161   73230 cri.go:89] found id: ""
	I0906 20:08:42.199191   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.199201   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:42.199208   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:42.199266   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:42.236956   73230 cri.go:89] found id: ""
	I0906 20:08:42.236980   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.236991   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:42.236996   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:42.237068   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:42.272299   73230 cri.go:89] found id: ""
	I0906 20:08:42.272328   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.272336   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:42.272341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:42.272400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:42.310280   73230 cri.go:89] found id: ""
	I0906 20:08:42.310304   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.310312   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:42.310317   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:42.310362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:42.345850   73230 cri.go:89] found id: ""
	I0906 20:08:42.345873   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.345881   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:42.345887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:42.345937   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:42.380785   73230 cri.go:89] found id: ""
	I0906 20:08:42.380812   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.380820   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:42.380830   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:42.380843   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.435803   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:42.435839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:42.450469   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:42.450498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:42.521565   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:42.521587   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:42.521599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:42.595473   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:42.595508   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:45.136985   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:45.150468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:45.150540   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:45.186411   73230 cri.go:89] found id: ""
	I0906 20:08:45.186440   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.186448   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:45.186454   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:45.186521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:45.224463   73230 cri.go:89] found id: ""
	I0906 20:08:45.224495   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.224506   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:45.224513   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:45.224568   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:45.262259   73230 cri.go:89] found id: ""
	I0906 20:08:45.262286   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.262295   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:45.262301   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:45.262357   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:45.299463   73230 cri.go:89] found id: ""
	I0906 20:08:45.299492   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.299501   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:45.299507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:45.299561   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:45.336125   73230 cri.go:89] found id: ""
	I0906 20:08:45.336153   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.336162   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:45.336168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:45.336216   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:45.370397   73230 cri.go:89] found id: ""
	I0906 20:08:45.370427   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.370439   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:45.370448   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:45.370518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:45.406290   73230 cri.go:89] found id: ""
	I0906 20:08:45.406322   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.406333   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:45.406341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:45.406402   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:45.441560   73230 cri.go:89] found id: ""
	I0906 20:08:45.441592   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.441603   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:45.441614   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:45.441627   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.840127   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.349331   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.192059   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:47.691416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.508769   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:45.508811   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:45.523659   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:45.523696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:45.595544   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:45.595567   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:45.595582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:45.676060   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:45.676096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:48.216490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:48.230021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:48.230093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:48.267400   73230 cri.go:89] found id: ""
	I0906 20:08:48.267433   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.267444   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:48.267451   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:48.267519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:48.314694   73230 cri.go:89] found id: ""
	I0906 20:08:48.314722   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.314731   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:48.314739   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:48.314805   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:48.358861   73230 cri.go:89] found id: ""
	I0906 20:08:48.358895   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.358906   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:48.358915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:48.358990   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:48.398374   73230 cri.go:89] found id: ""
	I0906 20:08:48.398400   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.398410   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:48.398416   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:48.398488   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:48.438009   73230 cri.go:89] found id: ""
	I0906 20:08:48.438039   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.438050   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:48.438058   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:48.438115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:48.475970   73230 cri.go:89] found id: ""
	I0906 20:08:48.475998   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.476007   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:48.476013   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:48.476071   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:48.512191   73230 cri.go:89] found id: ""
	I0906 20:08:48.512220   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.512230   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:48.512237   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:48.512299   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:48.547820   73230 cri.go:89] found id: ""
	I0906 20:08:48.547850   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.547861   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:48.547872   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:48.547886   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:48.616962   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:48.616997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:48.631969   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:48.631998   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:48.717025   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:48.717043   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:48.717054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:48.796131   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:48.796167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:47.838558   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.839063   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.839099   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.693239   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:52.191416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.342030   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:51.355761   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:51.355845   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:51.395241   73230 cri.go:89] found id: ""
	I0906 20:08:51.395272   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.395283   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:51.395290   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:51.395350   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:51.433860   73230 cri.go:89] found id: ""
	I0906 20:08:51.433888   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.433897   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:51.433904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:51.433968   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:51.475568   73230 cri.go:89] found id: ""
	I0906 20:08:51.475598   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.475608   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:51.475615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:51.475678   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:51.512305   73230 cri.go:89] found id: ""
	I0906 20:08:51.512329   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.512337   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:51.512342   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:51.512391   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:51.545796   73230 cri.go:89] found id: ""
	I0906 20:08:51.545819   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.545827   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:51.545833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:51.545884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:51.578506   73230 cri.go:89] found id: ""
	I0906 20:08:51.578531   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.578539   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:51.578545   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:51.578609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:51.616571   73230 cri.go:89] found id: ""
	I0906 20:08:51.616596   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.616609   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:51.616615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:51.616660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:51.651542   73230 cri.go:89] found id: ""
	I0906 20:08:51.651566   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.651580   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:51.651588   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:51.651599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:51.705160   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:51.705193   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:51.719450   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:51.719477   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:51.789775   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:51.789796   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:51.789809   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:51.870123   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:51.870158   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.411818   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:54.425759   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:54.425818   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:54.467920   73230 cri.go:89] found id: ""
	I0906 20:08:54.467943   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.467951   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:54.467956   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:54.468008   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:54.508324   73230 cri.go:89] found id: ""
	I0906 20:08:54.508349   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.508357   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:54.508363   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:54.508410   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:54.544753   73230 cri.go:89] found id: ""
	I0906 20:08:54.544780   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.544790   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:54.544797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:54.544884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:54.581407   73230 cri.go:89] found id: ""
	I0906 20:08:54.581436   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.581446   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:54.581453   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:54.581514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:54.618955   73230 cri.go:89] found id: ""
	I0906 20:08:54.618986   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.618998   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:54.619006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:54.619065   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:54.656197   73230 cri.go:89] found id: ""
	I0906 20:08:54.656229   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.656248   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:54.656255   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:54.656316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:54.697499   73230 cri.go:89] found id: ""
	I0906 20:08:54.697536   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.697544   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:54.697549   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:54.697600   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:54.734284   73230 cri.go:89] found id: ""
	I0906 20:08:54.734313   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.734331   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:54.734342   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:54.734356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:54.811079   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:54.811100   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:54.811111   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:54.887309   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:54.887346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.930465   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:54.930499   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:55.000240   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:55.000303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:54.339076   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:54.833352   72867 pod_ready.go:82] duration metric: took 4m0.000854511s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:54.833398   72867 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:54.833423   72867 pod_ready.go:39] duration metric: took 4m14.79685184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:54.833458   72867 kubeadm.go:597] duration metric: took 4m22.254900492s to restartPrimaryControlPlane
	W0906 20:08:54.833525   72867 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:54.833576   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:08:54.192038   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:56.192120   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:58.193505   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:57.530956   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:57.544056   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:57.544136   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:57.584492   73230 cri.go:89] found id: ""
	I0906 20:08:57.584519   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.584528   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:57.584534   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:57.584585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:57.620220   73230 cri.go:89] found id: ""
	I0906 20:08:57.620250   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.620259   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:57.620265   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:57.620321   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:57.655245   73230 cri.go:89] found id: ""
	I0906 20:08:57.655268   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.655283   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:57.655288   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:57.655346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:57.690439   73230 cri.go:89] found id: ""
	I0906 20:08:57.690470   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.690481   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:57.690487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:57.690551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:57.728179   73230 cri.go:89] found id: ""
	I0906 20:08:57.728206   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.728214   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:57.728221   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:57.728270   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:57.763723   73230 cri.go:89] found id: ""
	I0906 20:08:57.763752   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.763761   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:57.763767   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:57.763825   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:57.799836   73230 cri.go:89] found id: ""
	I0906 20:08:57.799861   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.799869   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:57.799876   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:57.799922   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:57.834618   73230 cri.go:89] found id: ""
	I0906 20:08:57.834644   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.834651   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:57.834660   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:57.834671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:57.887297   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:57.887331   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:57.901690   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:57.901717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:57.969179   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:57.969209   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:57.969223   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:58.052527   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:58.052642   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:58.870446   72441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.219876198s)
	I0906 20:08:58.870530   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:08:58.888197   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:08:58.899185   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:08:58.909740   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:08:58.909762   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:08:58.909806   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:08:58.919589   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:08:58.919646   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:08:58.930386   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:08:58.940542   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:08:58.940621   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:08:58.951673   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.963471   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:08:58.963545   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.974638   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:08:58.984780   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:08:58.984843   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:08:58.995803   72441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:08:59.046470   72441 kubeadm.go:310] W0906 20:08:59.003226    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.047297   72441 kubeadm.go:310] W0906 20:08:59.004193    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.166500   72441 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:00.691499   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:02.692107   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:00.593665   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:00.608325   73230 kubeadm.go:597] duration metric: took 4m4.153407014s to restartPrimaryControlPlane
	W0906 20:09:00.608399   73230 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:00.608428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:05.878028   73230 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.269561172s)
	I0906 20:09:05.878112   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:05.893351   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:05.904668   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:05.915560   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:05.915583   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:05.915633   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:09:05.926566   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:05.926625   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:05.937104   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:09:05.946406   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:05.946467   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:05.956203   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.965691   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:05.965751   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.976210   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:09:05.986104   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:05.986174   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:05.996282   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:06.068412   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:09:06.068507   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:06.213882   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:06.214044   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:06.214191   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:09:06.406793   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.067295   72441 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:07.067370   72441 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:07.067449   72441 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:07.067595   72441 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:07.067737   72441 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:07.067795   72441 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.069381   72441 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:07.069477   72441 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:07.069559   72441 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:07.069652   72441 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:07.069733   72441 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:07.069825   72441 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:07.069898   72441 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:07.069981   72441 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:07.070068   72441 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:07.070178   72441 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:07.070279   72441 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:07.070349   72441 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:07.070424   72441 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:07.070494   72441 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:07.070592   72441 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:07.070669   72441 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.070755   72441 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.070828   72441 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.070916   72441 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.070972   72441 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:07.072214   72441 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.072317   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.072399   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.072487   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.072613   72441 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.072685   72441 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.072719   72441 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.072837   72441 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:07.072977   72441 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:07.073063   72441 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.515053ms
	I0906 20:09:07.073178   72441 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:07.073257   72441 kubeadm.go:310] [api-check] The API server is healthy after 5.001748851s
	I0906 20:09:07.073410   72441 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:07.073558   72441 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:07.073650   72441 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:07.073860   72441 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-458066 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:07.073936   72441 kubeadm.go:310] [bootstrap-token] Using token: 3t2lf6.w44vkc4kfppuo2gp
	I0906 20:09:07.075394   72441 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:07.075524   72441 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:07.075621   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:07.075738   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:07.075905   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:07.076003   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:07.076094   72441 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:07.076222   72441 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:07.076397   72441 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:07.076486   72441 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:07.076502   72441 kubeadm.go:310] 
	I0906 20:09:07.076579   72441 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:07.076594   72441 kubeadm.go:310] 
	I0906 20:09:07.076687   72441 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:07.076698   72441 kubeadm.go:310] 
	I0906 20:09:07.076727   72441 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:07.076810   72441 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:07.076893   72441 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:07.076900   72441 kubeadm.go:310] 
	I0906 20:09:07.077016   72441 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:07.077029   72441 kubeadm.go:310] 
	I0906 20:09:07.077090   72441 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:07.077105   72441 kubeadm.go:310] 
	I0906 20:09:07.077172   72441 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:07.077273   72441 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:07.077368   72441 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:07.077377   72441 kubeadm.go:310] 
	I0906 20:09:07.077496   72441 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:07.077589   72441 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:07.077600   72441 kubeadm.go:310] 
	I0906 20:09:07.077680   72441 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.077767   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:07.077807   72441 kubeadm.go:310] 	--control-plane 
	I0906 20:09:07.077817   72441 kubeadm.go:310] 
	I0906 20:09:07.077927   72441 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:07.077946   72441 kubeadm.go:310] 
	I0906 20:09:07.078053   72441 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.078191   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:09:07.078206   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:09:07.078216   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:07.079782   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:07.080965   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:07.092500   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:09:07.112546   72441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:07.112618   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:07.112648   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-458066 minikube.k8s.io/updated_at=2024_09_06T20_09_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=embed-certs-458066 minikube.k8s.io/primary=true
	I0906 20:09:07.343125   72441 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:07.343284   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:06.408933   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:06.409043   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:06.409126   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:06.409242   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:06.409351   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:06.409445   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:06.409559   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:06.409666   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:06.409758   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:06.409870   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:06.409964   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:06.410010   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:06.410101   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:06.721268   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:06.888472   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.414908   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.505887   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.525704   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.525835   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.525913   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.699971   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:04.692422   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.193312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.701970   73230 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.702095   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.708470   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.710216   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.711016   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.714706   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:09:07.844097   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.344174   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.843884   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.343591   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.843748   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.344148   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.844002   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.343424   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.444023   72441 kubeadm.go:1113] duration metric: took 4.331471016s to wait for elevateKubeSystemPrivileges
	I0906 20:09:11.444067   72441 kubeadm.go:394] duration metric: took 4m58.815096997s to StartCluster
	I0906 20:09:11.444093   72441 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.444186   72441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:11.446093   72441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.446360   72441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:11.446430   72441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:11.446521   72441 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-458066"
	I0906 20:09:11.446542   72441 addons.go:69] Setting default-storageclass=true in profile "embed-certs-458066"
	I0906 20:09:11.446560   72441 addons.go:69] Setting metrics-server=true in profile "embed-certs-458066"
	I0906 20:09:11.446609   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:11.446615   72441 addons.go:234] Setting addon metrics-server=true in "embed-certs-458066"
	W0906 20:09:11.446663   72441 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:11.446694   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.446576   72441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-458066"
	I0906 20:09:11.446570   72441 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-458066"
	W0906 20:09:11.446779   72441 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:11.446810   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.447077   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447112   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447170   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447211   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447350   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447426   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447879   72441 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:11.449461   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:11.463673   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44603
	I0906 20:09:11.463676   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0906 20:09:11.464129   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464231   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464669   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464691   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.464675   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464745   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.465097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465139   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465608   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465634   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.465731   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465778   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.466622   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0906 20:09:11.466967   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.467351   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.467366   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.467622   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.467759   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.471093   72441 addons.go:234] Setting addon default-storageclass=true in "embed-certs-458066"
	W0906 20:09:11.471115   72441 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:11.471145   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.471524   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.471543   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.488980   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0906 20:09:11.489014   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0906 20:09:11.489399   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0906 20:09:11.489465   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489517   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489908   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.490116   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490134   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490144   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490158   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490411   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490427   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490481   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490872   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490886   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.491406   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.491500   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.491520   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.491619   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.493485   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.493901   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.495272   72441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:11.495274   72441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:11.496553   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:11.496575   72441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:11.496597   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.496647   72441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.496667   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:11.496684   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.500389   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500395   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500469   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.500786   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500808   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500952   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501105   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.501145   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501259   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501305   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.501389   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501501   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.510188   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0906 20:09:11.510617   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.511142   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.511169   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.511539   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.511754   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.513207   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.513439   72441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.513455   72441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:11.513474   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.516791   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517292   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.517323   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517563   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.517898   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.518085   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.518261   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.669057   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:11.705086   72441 node_ready.go:35] waiting up to 6m0s for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731651   72441 node_ready.go:49] node "embed-certs-458066" has status "Ready":"True"
	I0906 20:09:11.731679   72441 node_ready.go:38] duration metric: took 26.546983ms for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731691   72441 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:11.740680   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:11.767740   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:11.767760   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:11.771571   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.804408   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:11.804435   72441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:11.844160   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.856217   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:11.856240   72441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:11.899134   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:13.159543   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.315345353s)
	I0906 20:09:13.159546   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387931315s)
	I0906 20:09:13.159639   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159660   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159601   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159711   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.159985   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.159997   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160008   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160018   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160080   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160095   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160104   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160115   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160265   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160289   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160401   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160417   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185478   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.185512   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.185914   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.185934   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185949   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.228561   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.329382232s)
	I0906 20:09:13.228621   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.228636   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228924   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.228978   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.228991   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.229001   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.229229   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.229258   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.229270   72441 addons.go:475] Verifying addon metrics-server=true in "embed-certs-458066"
	I0906 20:09:13.230827   72441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:09.691281   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:11.692514   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:13.231988   72441 addons.go:510] duration metric: took 1.785558897s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0906 20:09:13.750043   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.247314   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.748039   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:16.748064   72441 pod_ready.go:82] duration metric: took 5.007352361s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:16.748073   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:14.192167   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.691856   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:18.754580   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:19.254643   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:19.254669   72441 pod_ready.go:82] duration metric: took 2.506589666s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:19.254680   72441 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762162   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.762188   72441 pod_ready.go:82] duration metric: took 1.507501384s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762202   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770835   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.770860   72441 pod_ready.go:82] duration metric: took 8.65029ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770872   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779692   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.779713   72441 pod_ready.go:82] duration metric: took 8.832607ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779725   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786119   72441 pod_ready.go:93] pod "kube-proxy-rzx2f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.786146   72441 pod_ready.go:82] duration metric: took 6.414063ms for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786158   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852593   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.852630   72441 pod_ready.go:82] duration metric: took 66.461213ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852642   72441 pod_ready.go:39] duration metric: took 9.120937234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:20.852663   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:20.852729   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:20.871881   72441 api_server.go:72] duration metric: took 9.425481233s to wait for apiserver process to appear ...
	I0906 20:09:20.871911   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:20.871927   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:09:20.876997   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:09:20.878290   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:20.878314   72441 api_server.go:131] duration metric: took 6.396943ms to wait for apiserver health ...
	I0906 20:09:20.878324   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:21.057265   72441 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:21.057303   72441 system_pods.go:61] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.057312   72441 system_pods.go:61] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.057319   72441 system_pods.go:61] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.057326   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.057332   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.057338   72441 system_pods.go:61] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.057345   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.057356   72441 system_pods.go:61] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.057367   72441 system_pods.go:61] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.057381   72441 system_pods.go:74] duration metric: took 179.050809ms to wait for pod list to return data ...
	I0906 20:09:21.057394   72441 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:21.252816   72441 default_sa.go:45] found service account: "default"
	I0906 20:09:21.252842   72441 default_sa.go:55] duration metric: took 195.436403ms for default service account to be created ...
	I0906 20:09:21.252851   72441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:21.455714   72441 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:21.455742   72441 system_pods.go:89] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.455748   72441 system_pods.go:89] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.455752   72441 system_pods.go:89] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.455755   72441 system_pods.go:89] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.455759   72441 system_pods.go:89] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.455763   72441 system_pods.go:89] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.455766   72441 system_pods.go:89] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.455772   72441 system_pods.go:89] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.455776   72441 system_pods.go:89] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.455784   72441 system_pods.go:126] duration metric: took 202.909491ms to wait for k8s-apps to be running ...
	I0906 20:09:21.455791   72441 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:21.455832   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.474124   72441 system_svc.go:56] duration metric: took 18.325386ms WaitForService to wait for kubelet
	I0906 20:09:21.474150   72441 kubeadm.go:582] duration metric: took 10.027757317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:21.474172   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:21.653674   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:21.653697   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:21.653708   72441 node_conditions.go:105] duration metric: took 179.531797ms to run NodePressure ...
	I0906 20:09:21.653718   72441 start.go:241] waiting for startup goroutines ...
	I0906 20:09:21.653727   72441 start.go:246] waiting for cluster config update ...
	I0906 20:09:21.653740   72441 start.go:255] writing updated cluster config ...
	I0906 20:09:21.654014   72441 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:21.703909   72441 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:21.705502   72441 out.go:177] * Done! kubectl is now configured to use "embed-certs-458066" cluster and "default" namespace by default
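	At this point the log reports a healthy control plane for "embed-certs-458066" with the storage-provisioner, default-storageclass and metrics-server addons enabled. A minimal sketch of how that state could be spot-checked from the host, assuming the kubeconfig context written above is named after the profile (as the "Done!" line suggests) and that the metrics-server Deployment carries the name shown in the pod list later in the log:
	
	  kubectl --context embed-certs-458066 get nodes
	  kubectl --context embed-certs-458066 -n kube-system get deploy metrics-server
	  kubectl --context embed-certs-458066 get storageclass
	
	These are illustrative follow-up commands only, not part of the captured test run; the metrics-server pod remains Pending/ContainersNotReady in the system_pods listing below, which is consistent with the fake.domain echoserver image used for the addon in this suite.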
	I0906 20:09:21.102986   72867 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.269383553s)
	I0906 20:09:21.103094   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.118935   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:21.129099   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:21.139304   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:21.139326   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:21.139374   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:09:21.149234   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:21.149289   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:21.160067   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:09:21.169584   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:21.169664   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:21.179885   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.190994   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:21.191062   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.201649   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:09:21.211165   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:21.211223   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:21.220998   72867 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:21.269780   72867 kubeadm.go:310] W0906 20:09:21.240800    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.270353   72867 kubeadm.go:310] W0906 20:09:21.241533    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.389445   72867 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:18.692475   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:21.193075   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:23.697031   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:26.191208   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:28.192166   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:30.493468   72867 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:30.493543   72867 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:30.493620   72867 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:30.493751   72867 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:30.493891   72867 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:30.493971   72867 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:30.495375   72867 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:30.495467   72867 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:30.495537   72867 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:30.495828   72867 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:30.495913   72867 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:30.495977   72867 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:30.496024   72867 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:30.496112   72867 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:30.496207   72867 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:30.496308   72867 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:30.496400   72867 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:30.496452   72867 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:30.496519   72867 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:30.496601   72867 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:30.496690   72867 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:30.496774   72867 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:30.496887   72867 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:30.496946   72867 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:30.497018   72867 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:30.497074   72867 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:30.498387   72867 out.go:235]   - Booting up control plane ...
	I0906 20:09:30.498472   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:30.498550   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:30.498616   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:30.498715   72867 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:30.498786   72867 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:30.498821   72867 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:30.498969   72867 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:30.499076   72867 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:30.499126   72867 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.325552ms
	I0906 20:09:30.499189   72867 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:30.499269   72867 kubeadm.go:310] [api-check] The API server is healthy after 5.002261512s
	I0906 20:09:30.499393   72867 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:30.499507   72867 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:30.499586   72867 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:30.499818   72867 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-653828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:30.499915   72867 kubeadm.go:310] [bootstrap-token] Using token: 6yha4r.f9kcjkhkq2u0pp1e
	I0906 20:09:30.501217   72867 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:30.501333   72867 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:30.501438   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:30.501630   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:30.501749   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:30.501837   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:30.501904   72867 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:30.501996   72867 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:30.502032   72867 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:30.502085   72867 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:30.502093   72867 kubeadm.go:310] 
	I0906 20:09:30.502153   72867 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:30.502166   72867 kubeadm.go:310] 
	I0906 20:09:30.502242   72867 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:30.502257   72867 kubeadm.go:310] 
	I0906 20:09:30.502290   72867 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:30.502358   72867 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:30.502425   72867 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:30.502433   72867 kubeadm.go:310] 
	I0906 20:09:30.502486   72867 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:30.502494   72867 kubeadm.go:310] 
	I0906 20:09:30.502529   72867 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:30.502536   72867 kubeadm.go:310] 
	I0906 20:09:30.502575   72867 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:30.502633   72867 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:30.502706   72867 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:30.502720   72867 kubeadm.go:310] 
	I0906 20:09:30.502791   72867 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:30.502882   72867 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:30.502893   72867 kubeadm.go:310] 
	I0906 20:09:30.502982   72867 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503099   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:30.503120   72867 kubeadm.go:310] 	--control-plane 
	I0906 20:09:30.503125   72867 kubeadm.go:310] 
	I0906 20:09:30.503240   72867 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:30.503247   72867 kubeadm.go:310] 
	I0906 20:09:30.503312   72867 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503406   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:09:30.503416   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:09:30.503424   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:30.504880   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:30.505997   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:30.517864   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:09:30.539641   72867 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:30.539731   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653828 minikube.k8s.io/updated_at=2024_09_06T20_09_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=default-k8s-diff-port-653828 minikube.k8s.io/primary=true
	I0906 20:09:30.539732   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.576812   72867 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:30.742163   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.242299   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.742502   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.192201   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.691488   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.242418   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:32.742424   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.242317   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.742587   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.242563   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.342481   72867 kubeadm.go:1113] duration metric: took 3.802829263s to wait for elevateKubeSystemPrivileges
	I0906 20:09:34.342520   72867 kubeadm.go:394] duration metric: took 5m1.826839653s to StartCluster
	I0906 20:09:34.342542   72867 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.342640   72867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:34.345048   72867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.345461   72867 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:34.345576   72867 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:34.345655   72867 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345691   72867 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653828"
	I0906 20:09:34.345696   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:34.345699   72867 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345712   72867 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345737   72867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653828"
	W0906 20:09:34.345703   72867 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:34.345752   72867 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.345762   72867 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:34.345779   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.345795   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.346102   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346136   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346174   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346195   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346231   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346201   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.347895   72867 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:34.349535   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:34.363021   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0906 20:09:34.363492   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.364037   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.364062   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.364463   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.365147   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.365186   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.365991   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36811
	I0906 20:09:34.366024   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0906 20:09:34.366472   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366512   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366953   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.366970   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367086   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.367113   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367494   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367642   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367988   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.368011   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.368282   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.375406   72867 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.375432   72867 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:34.375460   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.375825   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.375858   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.382554   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0906 20:09:34.383102   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.383600   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.383616   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.383938   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.384214   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.385829   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.387409   72867 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:34.388348   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:34.388366   72867 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:34.388381   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.392542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.392813   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.392828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.393018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.393068   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0906 20:09:34.393374   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.393439   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.393550   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.393686   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.394089   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.394116   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.394464   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.394651   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.396559   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.396712   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0906 20:09:34.397142   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.397646   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.397669   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.397929   72867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:34.398023   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.398468   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.398511   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.399007   72867 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.399024   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:34.399043   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.405024   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405057   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.405081   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405287   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.405479   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.405634   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.405752   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.414779   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0906 20:09:34.415230   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.415662   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.415679   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.415993   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.416151   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.417818   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.418015   72867 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.418028   72867 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:34.418045   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.421303   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421379   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.421399   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421645   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.421815   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.421979   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.422096   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.582923   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:34.600692   72867 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617429   72867 node_ready.go:49] node "default-k8s-diff-port-653828" has status "Ready":"True"
	I0906 20:09:34.617454   72867 node_ready.go:38] duration metric: took 16.723446ms for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617465   72867 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:34.632501   72867 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:34.679561   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.682999   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.746380   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:34.746406   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:34.876650   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:34.876680   72867 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:34.935388   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:34.935415   72867 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:35.092289   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:35.709257   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02965114s)
	I0906 20:09:35.709297   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026263795s)
	I0906 20:09:35.709352   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709373   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709319   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709398   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709810   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.709911   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709898   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709926   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.709954   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709962   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709876   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710029   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710047   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.710065   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.710226   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710238   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710636   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.710665   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710681   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754431   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.754458   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.754765   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.754781   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754821   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.181191   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:36.181219   72867 pod_ready.go:82] duration metric: took 1.54868366s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.181233   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.351617   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.259284594s)
	I0906 20:09:36.351684   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.351701   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.351992   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352078   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352100   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.352111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.352055   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352402   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352914   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352934   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352945   72867 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-653828"
	I0906 20:09:36.354972   72867 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:36.356127   72867 addons.go:510] duration metric: took 2.010554769s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
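
Note: the addon install above copies each manifest onto the node (the "scp ... --> /etc/kubernetes/addons/..." lines) and then applies it with the kubectl binary bundled in the VM, with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. Below is a minimal sketch of that apply step, assuming it runs directly on the node (the real flow goes through minikube's ssh_runner over SSH); the binary path, kubeconfig path, and manifest list are taken from the log, while applyAddonManifests is only an illustrative name.

package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests mirrors the apply step seen in the log: one kubectl
// invocation, run as root on the node, with KUBECONFIG pointing at the
// in-VM kubeconfig. This is a sketch, not minikube's actual helper.
func applyAddonManifests(manifests []string) error {
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Same manifest set as the metrics-server apply recorded above.
	if err := applyAddonManifests([]string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}); err != nil {
		fmt.Println(err)
	}
}
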
	I0906 20:09:34.695700   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:37.193366   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:38.187115   72867 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:39.188966   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:39.188998   72867 pod_ready.go:82] duration metric: took 3.007757042s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:39.189012   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:41.196228   72867 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.206614   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.206636   72867 pod_ready.go:82] duration metric: took 3.017616218s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.206647   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212140   72867 pod_ready.go:93] pod "kube-proxy-7846f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.212165   72867 pod_ready.go:82] duration metric: took 5.512697ms for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212174   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217505   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.217527   72867 pod_ready.go:82] duration metric: took 5.346748ms for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217534   72867 pod_ready.go:39] duration metric: took 7.600058293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
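
Note: the pod_ready waits above re-check each system-critical pod in kube-system until its Ready condition reports True, with a 6m cap per pod. A rough equivalent of one such wait is sketched below, shelling out to the node's kubectl with a jsonpath query; waitPodReady is an illustrative helper, not minikube's implementation, and the polling interval is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady keeps querying a pod's Ready condition until it is "True"
// or the deadline passes, conceptually like the pod_ready waits in the log.
func waitPodReady(namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.0/kubectl",
			"-n", namespace, "get", "pod", name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
}

func main() {
	if err := waitPodReady("kube-system", "etcd-default-k8s-diff-port-653828", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
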
	I0906 20:09:42.217549   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:42.217600   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:42.235961   72867 api_server.go:72] duration metric: took 7.890460166s to wait for apiserver process to appear ...
	I0906 20:09:42.235987   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:42.236003   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:09:42.240924   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:09:42.241889   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:42.241912   72867 api_server.go:131] duration metric: took 5.919055ms to wait for apiserver health ...
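
Note: the probes above hit the apiserver's /healthz over HTTPS on this profile's port (8444) and then read the control-plane version, presumably from the /version endpoint. A minimal sketch follows; it skips TLS verification purely to stay short (the real client authenticates with the cluster CA and client certificates) and relies on /healthz and /version being readable anonymously, which is the Kubernetes default.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Insecure TLS is for illustration only; do not do this outside a sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	// Health check: expect HTTP 200 with body "ok", as in the log.
	resp, err := client.Get("https://192.168.50.16:8444/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, strings.TrimSpace(string(body)))

	// Version check: the /version endpoint reports the gitVersion (v1.31.0 here).
	resp, err = client.Get("https://192.168.50.16:8444/version")
	if err != nil {
		fmt.Println("version:", err)
		return
	}
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	json.NewDecoder(resp.Body).Decode(&v)
	resp.Body.Close()
	fmt.Println("control plane version:", v.GitVersion)
}
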
	I0906 20:09:42.241922   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:42.247793   72867 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:42.247825   72867 system_pods.go:61] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.247833   72867 system_pods.go:61] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.247839   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.247845   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.247852   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.247857   72867 system_pods.go:61] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.247861   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.247866   72867 system_pods.go:61] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.247873   72867 system_pods.go:61] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.247883   72867 system_pods.go:74] duration metric: took 5.95413ms to wait for pod list to return data ...
	I0906 20:09:42.247893   72867 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:42.251260   72867 default_sa.go:45] found service account: "default"
	I0906 20:09:42.251277   72867 default_sa.go:55] duration metric: took 3.3795ms for default service account to be created ...
	I0906 20:09:42.251284   72867 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:42.256204   72867 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:42.256228   72867 system_pods.go:89] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.256233   72867 system_pods.go:89] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.256237   72867 system_pods.go:89] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.256241   72867 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.256245   72867 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.256249   72867 system_pods.go:89] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.256252   72867 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.256258   72867 system_pods.go:89] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.256261   72867 system_pods.go:89] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.256270   72867 system_pods.go:126] duration metric: took 4.981383ms to wait for k8s-apps to be running ...
	I0906 20:09:42.256278   72867 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:42.256323   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:42.272016   72867 system_svc.go:56] duration metric: took 15.727796ms WaitForService to wait for kubelet
	I0906 20:09:42.272050   72867 kubeadm.go:582] duration metric: took 7.926551396s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:42.272081   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:42.275486   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:42.275516   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:42.275527   72867 node_conditions.go:105] duration metric: took 3.439966ms to run NodePressure ...
	I0906 20:09:42.275540   72867 start.go:241] waiting for startup goroutines ...
	I0906 20:09:42.275548   72867 start.go:246] waiting for cluster config update ...
	I0906 20:09:42.275561   72867 start.go:255] writing updated cluster config ...
	I0906 20:09:42.275823   72867 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:42.326049   72867 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:42.328034   72867 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653828" cluster and "default" namespace by default
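
Note: the final line compares the host kubectl version against the cluster version and reports the minor-version skew (0 here). A tiny sketch of that comparison is below, assuming plain major.minor.patch version strings; minorSkew is an illustrative name, not minikube's function.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// the kubectl and cluster versions, i.e. the "(minor skew: N)" figure above.
func minorSkew(client, server string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0
		}
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(client) - minor(server)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.31.0", "1.31.0")) // 0, as reported in the log
}
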
	I0906 20:09:39.692393   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.192176   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:44.691934   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:45.185317   72322 pod_ready.go:82] duration metric: took 4m0.000138495s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	E0906 20:09:45.185352   72322 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:09:45.185371   72322 pod_ready.go:39] duration metric: took 4m12.222584677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:45.185403   72322 kubeadm.go:597] duration metric: took 4m20.152442555s to restartPrimaryControlPlane
	W0906 20:09:45.185466   72322 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:45.185496   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:47.714239   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:09:47.714464   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:47.714711   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:09:52.715187   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:52.715391   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:02.716155   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:02.716424   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
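
Note: kubeadm's [kubelet-check] above keeps curling http://localhost:10248/healthz and treats connection-refused as "kubelet not up yet" until its timeout expires. A minimal equivalent of that retry loop is sketched below; the per-attempt timeout, retry interval, and overall deadline are illustrative, not kubeadm's exact values.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy probes the kubelet healthz endpoint until it answers
// 200 or the deadline passes. A connection-refused error, as in the log
// above, just means the kubelet is not listening on 10248 yet.
func waitKubeletHealthy(timeout time.Duration) error {
	client := &http.Client{Timeout: 3 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("kubelet not healthy within %s", timeout)
}

func main() {
	if err := waitKubeletHealthy(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
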
	I0906 20:10:11.446625   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261097398s)
	I0906 20:10:11.446717   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:11.472899   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:10:11.492643   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:10:11.509855   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:10:11.509878   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:10:11.509933   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:10:11.523039   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:10:11.523099   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:10:11.540484   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:10:11.560246   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:10:11.560323   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:10:11.585105   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.596067   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:10:11.596138   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.607049   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:10:11.616982   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:10:11.617058   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
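
Note: the stale-config check above lists the four kubeconfigs under /etc/kubernetes, greps each one for the expected control-plane endpoint, and removes any file that is missing it (or missing entirely) so that kubeadm init regenerates them. A condensed sketch of that per-file check follows, reading the files directly instead of running sudo grep / rm -f as the log does.

package main

import (
	"fmt"
	"os"
	"strings"
)

// Keep a kubeconfig only if it already points at the expected control-plane
// endpoint; otherwise remove it so kubeadm init writes a fresh one.
func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale config:", f)
			os.Remove(f) // ignores "no such file", like rm -f
		}
	}
}
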
	I0906 20:10:11.627880   72322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:10:11.672079   72322 kubeadm.go:310] W0906 20:10:11.645236    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.672935   72322 kubeadm.go:310] W0906 20:10:11.646151    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.789722   72322 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:10:20.270339   72322 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:10:20.270450   72322 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:10:20.270551   72322 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:10:20.270697   72322 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:10:20.270837   72322 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:10:20.270932   72322 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:10:20.272324   72322 out.go:235]   - Generating certificates and keys ...
	I0906 20:10:20.272437   72322 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:10:20.272530   72322 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:10:20.272634   72322 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:10:20.272732   72322 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:10:20.272842   72322 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:10:20.272950   72322 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:10:20.273051   72322 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:10:20.273135   72322 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:10:20.273272   72322 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:10:20.273361   72322 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:10:20.273400   72322 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:10:20.273456   72322 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:10:20.273517   72322 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:10:20.273571   72322 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:10:20.273625   72322 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:10:20.273682   72322 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:10:20.273731   72322 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:10:20.273801   72322 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:10:20.273856   72322 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:10:20.275359   72322 out.go:235]   - Booting up control plane ...
	I0906 20:10:20.275466   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:10:20.275539   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:10:20.275595   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:10:20.275692   72322 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:10:20.275774   72322 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:10:20.275812   72322 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:10:20.275917   72322 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:10:20.276005   72322 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:10:20.276063   72322 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001365031s
	I0906 20:10:20.276127   72322 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:10:20.276189   72322 kubeadm.go:310] [api-check] The API server is healthy after 5.002810387s
	I0906 20:10:20.276275   72322 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:10:20.276410   72322 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:10:20.276480   72322 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:10:20.276639   72322 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-504385 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:10:20.276690   72322 kubeadm.go:310] [bootstrap-token] Using token: fv12w2.cc6vcthx5yn6r6ru
	I0906 20:10:20.277786   72322 out.go:235]   - Configuring RBAC rules ...
	I0906 20:10:20.277872   72322 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:10:20.277941   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:10:20.278082   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:10:20.278231   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:10:20.278351   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:10:20.278426   72322 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:10:20.278541   72322 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:10:20.278614   72322 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:10:20.278692   72322 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:10:20.278700   72322 kubeadm.go:310] 
	I0906 20:10:20.278780   72322 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:10:20.278790   72322 kubeadm.go:310] 
	I0906 20:10:20.278880   72322 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:10:20.278889   72322 kubeadm.go:310] 
	I0906 20:10:20.278932   72322 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:10:20.279023   72322 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:10:20.279079   72322 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:10:20.279086   72322 kubeadm.go:310] 
	I0906 20:10:20.279141   72322 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:10:20.279148   72322 kubeadm.go:310] 
	I0906 20:10:20.279186   72322 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:10:20.279195   72322 kubeadm.go:310] 
	I0906 20:10:20.279291   72322 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:10:20.279420   72322 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:10:20.279524   72322 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:10:20.279535   72322 kubeadm.go:310] 
	I0906 20:10:20.279647   72322 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:10:20.279756   72322 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:10:20.279767   72322 kubeadm.go:310] 
	I0906 20:10:20.279896   72322 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280043   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:10:20.280080   72322 kubeadm.go:310] 	--control-plane 
	I0906 20:10:20.280090   72322 kubeadm.go:310] 
	I0906 20:10:20.280230   72322 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:10:20.280258   72322 kubeadm.go:310] 
	I0906 20:10:20.280365   72322 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280514   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:10:20.280532   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:10:20.280541   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:10:20.282066   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:10:20.283228   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:10:20.294745   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
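
Note: with the kvm2 driver and the crio runtime, minikube falls back to the built-in bridge CNI and writes a conflist to /etc/cni/net.d/1-k8s.conflist (496 bytes in this run; the exact contents are not in the log). The sketch below writes a generic CNI bridge conflist of the same shape; the plugin list, bridge name, and pod subnet are illustrative values taken from the CNI plugin documentation, not necessarily what minikube writes.

package main

import "os"

// A generic "bridge" conflist in the shape /etc/cni/net.d/1-k8s.conflist
// takes. Illustrative only; all values below are example settings.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
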
	I0906 20:10:20.317015   72322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-504385 minikube.k8s.io/updated_at=2024_09_06T20_10_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=no-preload-504385 minikube.k8s.io/primary=true
	I0906 20:10:20.528654   72322 ops.go:34] apiserver oom_adj: -16
	I0906 20:10:20.528681   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.029394   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.528922   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.029667   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.528814   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.029163   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.529709   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.029277   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.529466   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.668636   72322 kubeadm.go:1113] duration metric: took 4.351557657s to wait for elevateKubeSystemPrivileges
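
Note: the elevateKubeSystemPrivileges step above binds cluster-admin to the kube-system default service account via the minikube-rbac clusterrolebinding, then re-runs "kubectl get sa default" roughly every half second until the default service account exists. A minimal sketch using the same kubectl invocations as the log follows; the kubectl wrapper function and the 2-minute cap are just for brevity.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// kubectl runs the node's bundled kubectl as root against the in-VM
// kubeconfig, matching the command lines recorded in the log.
func kubectl(args ...string) error {
	base := []string{"/var/lib/minikube/binaries/v1.31.0/kubectl"}
	base = append(base, args...)
	base = append(base, "--kubeconfig=/var/lib/minikube/kubeconfig")
	return exec.Command("sudo", base...).Run()
}

func main() {
	// Grant cluster-admin to kube-system:default (ignore "already exists").
	_ = kubectl("create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default")

	// Poll until the default service account has been created.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if kubectl("get", "sa", "default") == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
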
	I0906 20:10:24.668669   72322 kubeadm.go:394] duration metric: took 4m59.692142044s to StartCluster
	I0906 20:10:24.668690   72322 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.668775   72322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:10:24.670483   72322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.670765   72322 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:10:24.670874   72322 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:10:24.670975   72322 addons.go:69] Setting storage-provisioner=true in profile "no-preload-504385"
	I0906 20:10:24.670990   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:10:24.671015   72322 addons.go:234] Setting addon storage-provisioner=true in "no-preload-504385"
	W0906 20:10:24.671027   72322 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:10:24.670988   72322 addons.go:69] Setting default-storageclass=true in profile "no-preload-504385"
	I0906 20:10:24.671020   72322 addons.go:69] Setting metrics-server=true in profile "no-preload-504385"
	I0906 20:10:24.671053   72322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-504385"
	I0906 20:10:24.671069   72322 addons.go:234] Setting addon metrics-server=true in "no-preload-504385"
	I0906 20:10:24.671057   72322 host.go:66] Checking if "no-preload-504385" exists ...
	W0906 20:10:24.671080   72322 addons.go:243] addon metrics-server should already be in state true
	I0906 20:10:24.671112   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.671387   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671413   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671433   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671462   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671476   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671509   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.672599   72322 out.go:177] * Verifying Kubernetes components...
	I0906 20:10:24.674189   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:10:24.688494   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0906 20:10:24.689082   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.689564   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.689586   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.690020   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.690242   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.691753   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0906 20:10:24.691758   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0906 20:10:24.692223   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692314   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692744   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692761   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.692892   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692912   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.693162   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693498   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693821   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.693851   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694035   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694067   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694118   72322 addons.go:234] Setting addon default-storageclass=true in "no-preload-504385"
	W0906 20:10:24.694133   72322 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:10:24.694159   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.694503   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694533   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.710695   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0906 20:10:24.712123   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.712820   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.712844   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.713265   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.713488   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.714238   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0906 20:10:24.714448   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0906 20:10:24.714584   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.714801   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.715454   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715472   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715517   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.715631   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715643   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715961   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716468   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716527   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.717120   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.717170   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.717534   72322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:10:24.718838   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.719392   72322 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:24.719413   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:10:24.719435   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.720748   72322 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:10:22.717567   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:22.717827   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:24.722045   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:10:24.722066   72322 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:10:24.722084   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.722722   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723383   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.723408   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723545   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.723788   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.723970   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.724133   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.725538   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.725987   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.726006   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.726137   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.726317   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.726499   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.726629   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.734236   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0906 20:10:24.734597   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.735057   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.735069   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.735479   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.735612   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.737446   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.737630   72322 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:24.737647   72322 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:10:24.737658   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.740629   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741040   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.741063   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741251   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.741418   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.741530   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.741659   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.903190   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:10:24.944044   72322 node_ready.go:35] waiting up to 6m0s for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960395   72322 node_ready.go:49] node "no-preload-504385" has status "Ready":"True"
	I0906 20:10:24.960436   72322 node_ready.go:38] duration metric: took 16.357022ms for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960453   72322 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:24.981153   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:25.103072   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:25.113814   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:10:25.113843   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:10:25.123206   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:25.209178   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:10:25.209208   72322 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:10:25.255577   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.255604   72322 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:10:25.297179   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.336592   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336615   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.336915   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.336930   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.336938   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336945   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.337164   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.337178   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.350330   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.350356   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.350630   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.350648   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850349   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850377   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850688   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.850707   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850717   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850725   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850974   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.851012   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.033886   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.033918   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034215   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034221   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034241   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034250   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.034258   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034525   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034533   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034579   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034593   72322 addons.go:475] Verifying addon metrics-server=true in "no-preload-504385"
	I0906 20:10:26.036358   72322 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0906 20:10:26.037927   72322 addons.go:510] duration metric: took 1.367055829s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0906 20:10:26.989945   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:28.987386   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:28.987407   72322 pod_ready.go:82] duration metric: took 4.006228588s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:28.987419   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:30.994020   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:32.999308   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:32.999332   72322 pod_ready.go:82] duration metric: took 4.01190401s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:32.999344   72322 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005872   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.005898   72322 pod_ready.go:82] duration metric: took 1.006546878s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005908   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010279   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.010306   72322 pod_ready.go:82] duration metric: took 4.391154ms for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010315   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014331   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.014346   72322 pod_ready.go:82] duration metric: took 4.025331ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014354   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018361   72322 pod_ready.go:93] pod "kube-proxy-48s2x" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.018378   72322 pod_ready.go:82] duration metric: took 4.018525ms for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018386   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191606   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.191630   72322 pod_ready.go:82] duration metric: took 173.23777ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191638   72322 pod_ready.go:39] duration metric: took 9.231173272s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:34.191652   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:10:34.191738   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:10:34.207858   72322 api_server.go:72] duration metric: took 9.537052258s to wait for apiserver process to appear ...
	I0906 20:10:34.207883   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:10:34.207904   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:10:34.214477   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:10:34.216178   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:10:34.216211   72322 api_server.go:131] duration metric: took 8.319856ms to wait for apiserver health ...
	I0906 20:10:34.216221   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:10:34.396409   72322 system_pods.go:59] 9 kube-system pods found
	I0906 20:10:34.396443   72322 system_pods.go:61] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.396451   72322 system_pods.go:61] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.396456   72322 system_pods.go:61] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.396461   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.396468   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.396472   72322 system_pods.go:61] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.396477   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.396487   72322 system_pods.go:61] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.396502   72322 system_pods.go:61] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.396514   72322 system_pods.go:74] duration metric: took 180.284785ms to wait for pod list to return data ...
	I0906 20:10:34.396526   72322 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:10:34.592160   72322 default_sa.go:45] found service account: "default"
	I0906 20:10:34.592186   72322 default_sa.go:55] duration metric: took 195.651674ms for default service account to be created ...
	I0906 20:10:34.592197   72322 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:10:34.795179   72322 system_pods.go:86] 9 kube-system pods found
	I0906 20:10:34.795210   72322 system_pods.go:89] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.795217   72322 system_pods.go:89] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.795221   72322 system_pods.go:89] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.795224   72322 system_pods.go:89] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.795228   72322 system_pods.go:89] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.795232   72322 system_pods.go:89] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.795238   72322 system_pods.go:89] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.795244   72322 system_pods.go:89] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.795249   72322 system_pods.go:89] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.795258   72322 system_pods.go:126] duration metric: took 203.05524ms to wait for k8s-apps to be running ...
	I0906 20:10:34.795270   72322 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:10:34.795328   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:34.810406   72322 system_svc.go:56] duration metric: took 15.127486ms WaitForService to wait for kubelet
	I0906 20:10:34.810437   72322 kubeadm.go:582] duration metric: took 10.13963577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:10:34.810461   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:10:34.993045   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:10:34.993077   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:10:34.993092   72322 node_conditions.go:105] duration metric: took 182.626456ms to run NodePressure ...
	I0906 20:10:34.993105   72322 start.go:241] waiting for startup goroutines ...
	I0906 20:10:34.993112   72322 start.go:246] waiting for cluster config update ...
	I0906 20:10:34.993122   72322 start.go:255] writing updated cluster config ...
	I0906 20:10:34.993401   72322 ssh_runner.go:195] Run: rm -f paused
	I0906 20:10:35.043039   72322 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:10:35.045782   72322 out.go:177] * Done! kubectl is now configured to use "no-preload-504385" cluster and "default" namespace by default
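	The readiness checks logged above (node Ready, system pods Ready, apiserver /healthz, NodePressure) can also be reproduced by hand against the same cluster. A minimal sketch, assuming the kubectl context created by minikube matches the profile name and that /healthz answers anonymous requests; the endpoint is the one probed in the api_server.go lines above:
	
		kubectl --context no-preload-504385 get nodes
		kubectl --context no-preload-504385 -n kube-system get pods
		curl -k https://192.168.61.184:8443/healthz   # same healthz URL as in the log; -k because the CA is minikube's own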
	I0906 20:11:02.719781   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:02.720062   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:02.720077   73230 kubeadm.go:310] 
	I0906 20:11:02.720125   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:11:02.720177   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:11:02.720189   73230 kubeadm.go:310] 
	I0906 20:11:02.720246   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:11:02.720290   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:11:02.720443   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:11:02.720469   73230 kubeadm.go:310] 
	I0906 20:11:02.720593   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:11:02.720665   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:11:02.720722   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:11:02.720746   73230 kubeadm.go:310] 
	I0906 20:11:02.720900   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:11:02.721018   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:11:02.721028   73230 kubeadm.go:310] 
	I0906 20:11:02.721180   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:11:02.721311   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:11:02.721405   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:11:02.721500   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:11:02.721512   73230 kubeadm.go:310] 
	I0906 20:11:02.722088   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:11:02.722199   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:11:02.722310   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0906 20:11:02.722419   73230 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
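	For reference, the troubleshooting steps the kubeadm output above suggests can be run directly on the affected node. A minimal sketch, assuming the node is reachable via the minikube profile named in the CRI-O section further below (old-k8s-version-843298); the crictl/journalctl invocations are the ones quoted in the error text:
	
		# open a shell on the node, then inspect the kubelet and the control-plane containers
		minikube ssh -p old-k8s-version-843298
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # substitute the id of the failing container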
	
	I0906 20:11:02.722469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:11:03.188091   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:11:03.204943   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:11:03.215434   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:11:03.215458   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:11:03.215506   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:11:03.225650   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:11:03.225713   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:11:03.236252   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:11:03.245425   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:11:03.245489   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:11:03.255564   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.264932   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:11:03.265014   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.274896   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:11:03.284027   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:11:03.284092   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:11:03.294368   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:11:03.377411   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:11:03.377509   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:11:03.537331   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:11:03.537590   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:11:03.537722   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:11:03.728458   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:11:03.730508   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:11:03.730621   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:11:03.730720   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:11:03.730869   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:11:03.730984   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:11:03.731082   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:11:03.731167   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:11:03.731258   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:11:03.731555   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:11:03.731896   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:11:03.732663   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:11:03.732953   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:11:03.733053   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:11:03.839927   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:11:03.988848   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:11:04.077497   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:11:04.213789   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:11:04.236317   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:11:04.237625   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:11:04.237719   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:11:04.399036   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:11:04.400624   73230 out.go:235]   - Booting up control plane ...
	I0906 20:11:04.400709   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:11:04.401417   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:11:04.402751   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:11:04.404122   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:11:04.407817   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:11:44.410273   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:11:44.410884   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:44.411132   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:49.411428   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:49.411674   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:59.412917   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:59.413182   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:19.414487   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:19.414692   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415457   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:59.415729   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415750   73230 kubeadm.go:310] 
	I0906 20:12:59.415808   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:12:59.415864   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:12:59.415874   73230 kubeadm.go:310] 
	I0906 20:12:59.415933   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:12:59.415979   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:12:59.416147   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:12:59.416167   73230 kubeadm.go:310] 
	I0906 20:12:59.416332   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:12:59.416372   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:12:59.416420   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:12:59.416428   73230 kubeadm.go:310] 
	I0906 20:12:59.416542   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:12:59.416650   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:12:59.416659   73230 kubeadm.go:310] 
	I0906 20:12:59.416818   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:12:59.416928   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:12:59.417030   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:12:59.417139   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:12:59.417153   73230 kubeadm.go:310] 
	I0906 20:12:59.417400   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:12:59.417485   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:12:59.417559   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0906 20:12:59.417626   73230 kubeadm.go:394] duration metric: took 8m3.018298427s to StartCluster
	I0906 20:12:59.417673   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:12:59.417741   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:12:59.464005   73230 cri.go:89] found id: ""
	I0906 20:12:59.464033   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.464040   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:12:59.464045   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:12:59.464101   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:12:59.504218   73230 cri.go:89] found id: ""
	I0906 20:12:59.504252   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.504264   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:12:59.504271   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:12:59.504327   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:12:59.541552   73230 cri.go:89] found id: ""
	I0906 20:12:59.541579   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.541589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:12:59.541596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:12:59.541663   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:12:59.580135   73230 cri.go:89] found id: ""
	I0906 20:12:59.580158   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.580168   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:12:59.580174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:12:59.580220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:12:59.622453   73230 cri.go:89] found id: ""
	I0906 20:12:59.622486   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.622498   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:12:59.622518   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:12:59.622587   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:12:59.661561   73230 cri.go:89] found id: ""
	I0906 20:12:59.661590   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.661601   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:12:59.661608   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:12:59.661668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:12:59.695703   73230 cri.go:89] found id: ""
	I0906 20:12:59.695732   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.695742   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:12:59.695749   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:12:59.695808   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:12:59.739701   73230 cri.go:89] found id: ""
	I0906 20:12:59.739733   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.739744   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:12:59.739756   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:12:59.739771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:12:59.791400   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:12:59.791428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:12:59.851142   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:12:59.851179   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:12:59.867242   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:12:59.867278   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:12:59.941041   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:12:59.941060   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:12:59.941071   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0906 20:13:00.061377   73230 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 20:13:00.061456   73230 out.go:270] * 
	W0906 20:13:00.061515   73230 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.061532   73230 out.go:270] * 
	W0906 20:13:00.062343   73230 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 20:13:00.065723   73230 out.go:201] 
	W0906 20:13:00.066968   73230 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.067028   73230 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 20:13:00.067059   73230 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 20:13:00.068497   73230 out.go:201] 
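	The exit message above suggests retrying with the kubelet cgroup driver overridden. A sketch of that retry, assuming the profile name shown in the CRI-O log below and using only the flag quoted in the suggestion:
	
		minikube start -p old-k8s-version-843298 --extra-config=kubelet.cgroup-driver=systemd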
	
	
	==> CRI-O <==
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.231806765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654125231718392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b79204bb-9f2d-4a1e-9603-ae9dbd9edd08 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.232468100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4037d5b9-fb4d-47c7-aa18-6a434162d782 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.232527690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4037d5b9-fb4d-47c7-aa18-6a434162d782 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.232563348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4037d5b9-fb4d-47c7-aa18-6a434162d782 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.268568213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36e08053-aeab-486d-8c1b-c09b318e04af name=/runtime.v1.RuntimeService/Version
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.268648935Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36e08053-aeab-486d-8c1b-c09b318e04af name=/runtime.v1.RuntimeService/Version
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.269867767Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55eb4836-f7e7-4f6c-b9d0-b50539d7e5cf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.270235884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654125270214091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55eb4836-f7e7-4f6c-b9d0-b50539d7e5cf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.271245814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfae7132-f7e0-42bc-8b1d-a693a1c21277 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.271295294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfae7132-f7e0-42bc-8b1d-a693a1c21277 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.271331713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cfae7132-f7e0-42bc-8b1d-a693a1c21277 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.303085015Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fc0039e-3d2f-4949-b667-f392cae80630 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.303162077Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fc0039e-3d2f-4949-b667-f392cae80630 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.304206976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=022a5f9c-66e6-4f9e-bd08-6fc5ac9baf65 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.304600025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654125304576997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=022a5f9c-66e6-4f9e-bd08-6fc5ac9baf65 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.305261553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1936459-0c15-4d4a-b676-975af9f45e6e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.305313894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1936459-0c15-4d4a-b676-975af9f45e6e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.305346697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f1936459-0c15-4d4a-b676-975af9f45e6e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.337784122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40ed3f2a-49d8-4678-8101-f558ebb96a22 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.337863786Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40ed3f2a-49d8-4678-8101-f558ebb96a22 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.339338701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c82f80b6-6758-4eca-a983-a4751b1699e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.339813269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654125339713148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c82f80b6-6758-4eca-a983-a4751b1699e1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.340355805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83012eab-0399-4531-b38f-46fcb7044a8b name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.340429152Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83012eab-0399-4531-b38f-46fcb7044a8b name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:22:05 old-k8s-version-843298 crio[630]: time="2024-09-06 20:22:05.340477538Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=83012eab-0399-4531-b38f-46fcb7044a8b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep 6 20:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050933] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039157] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.987920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.571048] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.647123] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.681954] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.060444] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073389] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.178170] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.167558] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.279257] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.753089] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.068747] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.083570] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[Sep 6 20:05] kauditd_printk_skb: 46 callbacks suppressed
	[Sep 6 20:09] systemd-fstab-generator[5052]: Ignoring "noauto" option for root device
	[Sep 6 20:11] systemd-fstab-generator[5331]: Ignoring "noauto" option for root device
	[  +0.061919] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:22:05 up 17 min,  0 users,  load average: 0.08, 0.03, 0.05
	Linux old-k8s-version-843298 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc0005e5b90)
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]: goroutine 152 [select]:
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009a1ef0, 0x4f0ac20, 0xc000119f90, 0x1, 0xc0001000c0)
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0009882a0, 0xc0001000c0)
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000499390, 0xc0009bc100)
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6500]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 06 20:22:00 old-k8s-version-843298 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 06 20:22:00 old-k8s-version-843298 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 06 20:22:00 old-k8s-version-843298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Sep 06 20:22:00 old-k8s-version-843298 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 06 20:22:00 old-k8s-version-843298 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6509]: I0906 20:22:00.788009    6509 server.go:416] Version: v1.20.0
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6509]: I0906 20:22:00.788383    6509 server.go:837] Client rotation is on, will bootstrap in background
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6509]: I0906 20:22:00.791613    6509 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6509]: W0906 20:22:00.792959    6509 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 06 20:22:00 old-k8s-version-843298 kubelet[6509]: I0906 20:22:00.793984    6509 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298 -n old-k8s-version-843298
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 2 (246.848599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-843298" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)
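The failure above ends with minikube's own suggestion to check 'journalctl -xeu kubelet' and to retry with --extra-config=kubelet.cgroup-driver=systemd, and the final status check reports the apiserver as Stopped. A minimal sketch of those follow-up commands, assuming the same profile name (old-k8s-version-843298) and the out/minikube-linux-amd64 binary used in this run:

	# Follow the kubelet crash loop on the node (restart counter was at 114 in the log above)
	out/minikube-linux-amd64 ssh -p old-k8s-version-843298 -- sudo journalctl -xeu kubelet
	# Retry the start with the cgroup driver pinned to systemd, as the suggestion recommends
	out/minikube-linux-amd64 start -p old-k8s-version-843298 --extra-config=kubelet.cgroup-driver=systemd
	# Confirm whether the apiserver came back up afterwards
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298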

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (429.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-458066 -n embed-certs-458066
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-06 20:25:33.365101932 +0000 UTC m=+6979.854855463
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-458066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-458066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.402µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-458066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
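The harness could not show the dashboard-metrics-scraper deployment because its own context deadline had already expired. A hypothetical manual check of what the addon actually deployed, reusing the context, namespace, label, and expected image named in the failure above, might look like:

	# List the dashboard pods the test was waiting for
	kubectl --context embed-certs-458066 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Show which image dashboard-metrics-scraper is running; the test expects it to contain registry.k8s.io/echoserver:1.4
	kubectl --context embed-certs-458066 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'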
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-458066 -n embed-certs-458066
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-458066 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-458066 logs -n 25: (1.314257161s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-859361 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | disable-driver-mounts-859361                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:57 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-504385             | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-458066            | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653828  | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC | 06 Sep 24 19:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC |                     |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-504385                  | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-458066                 | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-843298        | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653828       | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-843298             | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:24 UTC | 06 Sep 24 20:24 UTC |
	| start   | -p newest-cni-113806 --memory=2200 --alsologtostderr   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:24 UTC | 06 Sep 24 20:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 20:24 UTC | 06 Sep 24 20:24 UTC |
	| addons  | enable metrics-server -p newest-cni-113806             | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:25 UTC | 06 Sep 24 20:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-113806                                   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:25 UTC | 06 Sep 24 20:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-113806                  | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:25 UTC | 06 Sep 24 20:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-113806 --memory=2200 --alsologtostderr   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:25 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 20:25:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:25:32.073827   80582 out.go:345] Setting OutFile to fd 1 ...
	I0906 20:25:32.073954   80582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:25:32.073963   80582 out.go:358] Setting ErrFile to fd 2...
	I0906 20:25:32.073968   80582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:25:32.074174   80582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 20:25:32.074734   80582 out.go:352] Setting JSON to false
	I0906 20:25:32.075693   80582 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7681,"bootTime":1725646651,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 20:25:32.075752   80582 start.go:139] virtualization: kvm guest
	I0906 20:25:32.078125   80582 out.go:177] * [newest-cni-113806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 20:25:32.079359   80582 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 20:25:32.079367   80582 notify.go:220] Checking for updates...
	I0906 20:25:32.080786   80582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:25:32.082192   80582 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:25:32.083261   80582 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:25:32.084554   80582 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 20:25:32.085765   80582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:25:32.087207   80582 config.go:182] Loaded profile config "newest-cni-113806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:25:32.087653   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:25:32.087723   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:25:32.103007   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0906 20:25:32.103350   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:25:32.103930   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:25:32.103958   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:25:32.104254   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:25:32.104469   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:32.104730   80582 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 20:25:32.105156   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:25:32.105198   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:25:32.121545   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I0906 20:25:32.121956   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:25:32.122415   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:25:32.122450   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:25:32.122788   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:25:32.122997   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:32.160138   80582 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 20:25:32.161400   80582 start.go:297] selected driver: kvm2
	I0906 20:25:32.161415   80582 start.go:901] validating driver "kvm2" against &{Name:newest-cni-113806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-113806 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]
ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:25:32.161520   80582 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:25:32.162198   80582 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:25:32.162267   80582 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 20:25:32.178230   80582 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 20:25:32.178660   80582 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 20:25:32.178700   80582 cni.go:84] Creating CNI manager for ""
	I0906 20:25:32.178711   80582 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:25:32.178760   80582 start.go:340] cluster config:
	{Name:newest-cni-113806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-113806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExp
iration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:25:32.178898   80582 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:25:32.180636   80582 out.go:177] * Starting "newest-cni-113806" primary control-plane node in "newest-cni-113806" cluster
	I0906 20:25:32.181601   80582 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:25:32.181633   80582 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 20:25:32.181645   80582 cache.go:56] Caching tarball of preloaded images
	I0906 20:25:32.181727   80582 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 20:25:32.181746   80582 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 20:25:32.181834   80582 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/config.json ...
	I0906 20:25:32.182005   80582 start.go:360] acquireMachinesLock for newest-cni-113806: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:25:32.182044   80582 start.go:364] duration metric: took 22.052µs to acquireMachinesLock for "newest-cni-113806"
	I0906 20:25:32.182062   80582 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:25:32.182071   80582 fix.go:54] fixHost starting: 
	I0906 20:25:32.182326   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:25:32.182359   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:25:32.197736   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
	I0906 20:25:32.198109   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:25:32.198588   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:25:32.198614   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:25:32.198954   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:25:32.199150   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:32.199343   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetState
	I0906 20:25:32.201206   80582 fix.go:112] recreateIfNeeded on newest-cni-113806: state=Stopped err=<nil>
	I0906 20:25:32.201251   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	W0906 20:25:32.201413   80582 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:25:32.203758   80582 out.go:177] * Restarting existing kvm2 VM for "newest-cni-113806" ...
	
	
	==> CRI-O <==
	Sep 06 20:25:33 embed-certs-458066 crio[708]: time="2024-09-06 20:25:33.968401467Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654333968374416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dad08bc4-3e5f-4167-8ec4-365589e39e83 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:25:33 embed-certs-458066 crio[708]: time="2024-09-06 20:25:33.969187757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81de6068-67c4-40aa-b605-4586691af342 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:33 embed-certs-458066 crio[708]: time="2024-09-06 20:25:33.969245413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81de6068-67c4-40aa-b605-4586691af342 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:33 embed-certs-458066 crio[708]: time="2024-09-06 20:25:33.969484726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a310412e4fc593a868b560923edaf2a2d97a8781f3bf198ddef6fcbabc30ea,PodSandboxId:ed8b0ac0ccfab7815363049809c3b4d30150855a7effa6522b8e64e7a0abb248,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653353617080301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51644de2-a533-44ec-8e7e-4842e80a896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dca79959ab052931eb8dfe83b403032d1dc8cb5cd45d5c9558c1acef26a20a8,PodSandboxId:88db6addd475cc2829a38b167389c9a5fd92e007133f926fb1f77511e57bd0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653353045978239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-br45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9992e3-3e5f-437d-90e0-b1087dca42e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5c25ddf467f94545e934f86a69678236422b4aabc5bb7c79a7d2c178cc6204,PodSandboxId:980c51c1efd8837a63f4de3db4b86192367e7d8fd78b00db87351c867c895fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653352927545519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gtlxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
806a981-e9dc-46ec-b440-94ea611c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f743811765445b814dcf080d2da3c45480620c42cd79fa8c2de33f996dd26c70,PodSandboxId:d5f43957a49278340fcb415e458015e5299fc1a163728abd2c58b2033c4c7b0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1725653352090981196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzx2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e52ab6-7d95-4a7a-acfa-66bbc748d1db,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0967ba02d355613d37db995ed77ff29c0e033806e963c18202dedeb7a6dc4c83,PodSandboxId:a0209b1658f52d092727669d36fcfc48b6e96a1b37d322083df75347c56f63f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653341088003524,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7b22e239d297d4a55de7cc9009cb12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30c0a5d7a13b5a7143ad119b5b65b7d84f9933225688694c3927007ce8208e,PodSandboxId:7adbf46638a3d35a4b64d98021e9e72559a95abe03a7c3b94b28e44ebcb862a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653341112222954,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b658c82eb54e0d4714bca5ecca195e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c4dcf1da46f860ab0d70d0478786996b10b91b427863964edcc8c26ce450672,PodSandboxId:d4878d4eed5725cfba45f4570d494a40f897e1f9f97006eabb3bc2ebf3929027,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653341080464863,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a869559af2c6de5b7dcb71ef5b628f00cb225f2afe49e3da71ccd3beeb5b7b0,PodSandboxId:4db3c3f431502ba8a5d68f13b529f6e72e809cd684591e80d0f3d90afbd8b79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653340964873531,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f21fbbd8883e745450e735168ec000,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9354d01c92c82f9751f0c5001763ce1a1b2d8897a98cb74a25f2686ec0357d,PodSandboxId:76d622d673f720b37ba2d548dfa02bdc922e9b7fb74ef36e6b7090d1af4a88bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653055116048356,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81de6068-67c4-40aa-b605-4586691af342 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.011663439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4362e48-dcd3-4dec-868f-5f2009e009ac name=/runtime.v1.RuntimeService/Version
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.011834123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4362e48-dcd3-4dec-868f-5f2009e009ac name=/runtime.v1.RuntimeService/Version
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.013100403Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a247454c-cffc-4463-bbf0-58cdfd7a580f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.013510842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654334013488289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a247454c-cffc-4463-bbf0-58cdfd7a580f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.014072822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd42a566-986a-40f5-a90c-54028c9b8a7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.014124738Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd42a566-986a-40f5-a90c-54028c9b8a7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.014318462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a310412e4fc593a868b560923edaf2a2d97a8781f3bf198ddef6fcbabc30ea,PodSandboxId:ed8b0ac0ccfab7815363049809c3b4d30150855a7effa6522b8e64e7a0abb248,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653353617080301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51644de2-a533-44ec-8e7e-4842e80a896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dca79959ab052931eb8dfe83b403032d1dc8cb5cd45d5c9558c1acef26a20a8,PodSandboxId:88db6addd475cc2829a38b167389c9a5fd92e007133f926fb1f77511e57bd0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653353045978239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-br45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9992e3-3e5f-437d-90e0-b1087dca42e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5c25ddf467f94545e934f86a69678236422b4aabc5bb7c79a7d2c178cc6204,PodSandboxId:980c51c1efd8837a63f4de3db4b86192367e7d8fd78b00db87351c867c895fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653352927545519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gtlxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
806a981-e9dc-46ec-b440-94ea611c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f743811765445b814dcf080d2da3c45480620c42cd79fa8c2de33f996dd26c70,PodSandboxId:d5f43957a49278340fcb415e458015e5299fc1a163728abd2c58b2033c4c7b0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1725653352090981196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzx2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e52ab6-7d95-4a7a-acfa-66bbc748d1db,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0967ba02d355613d37db995ed77ff29c0e033806e963c18202dedeb7a6dc4c83,PodSandboxId:a0209b1658f52d092727669d36fcfc48b6e96a1b37d322083df75347c56f63f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653341088003524,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7b22e239d297d4a55de7cc9009cb12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30c0a5d7a13b5a7143ad119b5b65b7d84f9933225688694c3927007ce8208e,PodSandboxId:7adbf46638a3d35a4b64d98021e9e72559a95abe03a7c3b94b28e44ebcb862a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653341112222954,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b658c82eb54e0d4714bca5ecca195e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c4dcf1da46f860ab0d70d0478786996b10b91b427863964edcc8c26ce450672,PodSandboxId:d4878d4eed5725cfba45f4570d494a40f897e1f9f97006eabb3bc2ebf3929027,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653341080464863,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a869559af2c6de5b7dcb71ef5b628f00cb225f2afe49e3da71ccd3beeb5b7b0,PodSandboxId:4db3c3f431502ba8a5d68f13b529f6e72e809cd684591e80d0f3d90afbd8b79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653340964873531,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f21fbbd8883e745450e735168ec000,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9354d01c92c82f9751f0c5001763ce1a1b2d8897a98cb74a25f2686ec0357d,PodSandboxId:76d622d673f720b37ba2d548dfa02bdc922e9b7fb74ef36e6b7090d1af4a88bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653055116048356,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd42a566-986a-40f5-a90c-54028c9b8a7e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.062050148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b13e9a45-16ac-491f-8aeb-e42015d09fad name=/runtime.v1.RuntimeService/Version
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.062125384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b13e9a45-16ac-491f-8aeb-e42015d09fad name=/runtime.v1.RuntimeService/Version
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.063389650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1578ad0-5af9-4a30-9190-9e12856b7262 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.064047802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654334064019233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1578ad0-5af9-4a30-9190-9e12856b7262 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.065266230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=352aed76-8678-4d9c-bf0d-de0ae430d519 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.065353853Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=352aed76-8678-4d9c-bf0d-de0ae430d519 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.065548252Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a310412e4fc593a868b560923edaf2a2d97a8781f3bf198ddef6fcbabc30ea,PodSandboxId:ed8b0ac0ccfab7815363049809c3b4d30150855a7effa6522b8e64e7a0abb248,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653353617080301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51644de2-a533-44ec-8e7e-4842e80a896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dca79959ab052931eb8dfe83b403032d1dc8cb5cd45d5c9558c1acef26a20a8,PodSandboxId:88db6addd475cc2829a38b167389c9a5fd92e007133f926fb1f77511e57bd0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653353045978239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-br45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9992e3-3e5f-437d-90e0-b1087dca42e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5c25ddf467f94545e934f86a69678236422b4aabc5bb7c79a7d2c178cc6204,PodSandboxId:980c51c1efd8837a63f4de3db4b86192367e7d8fd78b00db87351c867c895fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653352927545519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gtlxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
806a981-e9dc-46ec-b440-94ea611c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f743811765445b814dcf080d2da3c45480620c42cd79fa8c2de33f996dd26c70,PodSandboxId:d5f43957a49278340fcb415e458015e5299fc1a163728abd2c58b2033c4c7b0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1725653352090981196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzx2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e52ab6-7d95-4a7a-acfa-66bbc748d1db,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0967ba02d355613d37db995ed77ff29c0e033806e963c18202dedeb7a6dc4c83,PodSandboxId:a0209b1658f52d092727669d36fcfc48b6e96a1b37d322083df75347c56f63f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653341088003524,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7b22e239d297d4a55de7cc9009cb12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30c0a5d7a13b5a7143ad119b5b65b7d84f9933225688694c3927007ce8208e,PodSandboxId:7adbf46638a3d35a4b64d98021e9e72559a95abe03a7c3b94b28e44ebcb862a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653341112222954,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b658c82eb54e0d4714bca5ecca195e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c4dcf1da46f860ab0d70d0478786996b10b91b427863964edcc8c26ce450672,PodSandboxId:d4878d4eed5725cfba45f4570d494a40f897e1f9f97006eabb3bc2ebf3929027,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653341080464863,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a869559af2c6de5b7dcb71ef5b628f00cb225f2afe49e3da71ccd3beeb5b7b0,PodSandboxId:4db3c3f431502ba8a5d68f13b529f6e72e809cd684591e80d0f3d90afbd8b79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653340964873531,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f21fbbd8883e745450e735168ec000,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9354d01c92c82f9751f0c5001763ce1a1b2d8897a98cb74a25f2686ec0357d,PodSandboxId:76d622d673f720b37ba2d548dfa02bdc922e9b7fb74ef36e6b7090d1af4a88bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653055116048356,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=352aed76-8678-4d9c-bf0d-de0ae430d519 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.102326195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7532afff-a667-47f5-8d0e-8467a620031b name=/runtime.v1.RuntimeService/Version
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.102416560Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7532afff-a667-47f5-8d0e-8467a620031b name=/runtime.v1.RuntimeService/Version
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.103747387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fc469d9-f1eb-4803-b68f-891b3f9dbc84 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.104301715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654334104273192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fc469d9-f1eb-4803-b68f-891b3f9dbc84 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.105034577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e188b76-57fe-4693-9ba7-e8d146a45315 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.105224613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e188b76-57fe-4693-9ba7-e8d146a45315 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:25:34 embed-certs-458066 crio[708]: time="2024-09-06 20:25:34.105462959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20a310412e4fc593a868b560923edaf2a2d97a8781f3bf198ddef6fcbabc30ea,PodSandboxId:ed8b0ac0ccfab7815363049809c3b4d30150855a7effa6522b8e64e7a0abb248,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653353617080301,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51644de2-a533-44ec-8e7e-4842e80a896e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dca79959ab052931eb8dfe83b403032d1dc8cb5cd45d5c9558c1acef26a20a8,PodSandboxId:88db6addd475cc2829a38b167389c9a5fd92e007133f926fb1f77511e57bd0e2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653353045978239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-br45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de9992e3-3e5f-437d-90e0-b1087dca42e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd5c25ddf467f94545e934f86a69678236422b4aabc5bb7c79a7d2c178cc6204,PodSandboxId:980c51c1efd8837a63f4de3db4b86192367e7d8fd78b00db87351c867c895fe7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653352927545519,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-gtlxq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
806a981-e9dc-46ec-b440-94ea611c8d27,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f743811765445b814dcf080d2da3c45480620c42cd79fa8c2de33f996dd26c70,PodSandboxId:d5f43957a49278340fcb415e458015e5299fc1a163728abd2c58b2033c4c7b0d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1725653352090981196,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzx2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e52ab6-7d95-4a7a-acfa-66bbc748d1db,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0967ba02d355613d37db995ed77ff29c0e033806e963c18202dedeb7a6dc4c83,PodSandboxId:a0209b1658f52d092727669d36fcfc48b6e96a1b37d322083df75347c56f63f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653341088003524,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7b22e239d297d4a55de7cc9009cb12,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30c0a5d7a13b5a7143ad119b5b65b7d84f9933225688694c3927007ce8208e,PodSandboxId:7adbf46638a3d35a4b64d98021e9e72559a95abe03a7c3b94b28e44ebcb862a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653341112222954,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05b658c82eb54e0d4714bca5ecca195e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c4dcf1da46f860ab0d70d0478786996b10b91b427863964edcc8c26ce450672,PodSandboxId:d4878d4eed5725cfba45f4570d494a40f897e1f9f97006eabb3bc2ebf3929027,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653341080464863,Labels:map[string]string{io.kubernetes.container.name: kube-api
server,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a869559af2c6de5b7dcb71ef5b628f00cb225f2afe49e3da71ccd3beeb5b7b0,PodSandboxId:4db3c3f431502ba8a5d68f13b529f6e72e809cd684591e80d0f3d90afbd8b79c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653340964873531,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4f21fbbd8883e745450e735168ec000,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b9354d01c92c82f9751f0c5001763ce1a1b2d8897a98cb74a25f2686ec0357d,PodSandboxId:76d622d673f720b37ba2d548dfa02bdc922e9b7fb74ef36e6b7090d1af4a88bd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653055116048356,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-458066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e0606625595b663ccf1f8febc65d8d9,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e188b76-57fe-4693-9ba7-e8d146a45315 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	20a310412e4fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   ed8b0ac0ccfab       storage-provisioner
	5dca79959ab05       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   88db6addd475c       coredns-6f6b679f8f-br45p
	fd5c25ddf467f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   980c51c1efd88       coredns-6f6b679f8f-gtlxq
	f743811765445       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   d5f43957a4927       kube-proxy-rzx2f
	5f30c0a5d7a13       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   7adbf46638a3d       etcd-embed-certs-458066
	0967ba02d3556       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   a0209b1658f52       kube-scheduler-embed-certs-458066
	3c4dcf1da46f8       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   d4878d4eed572       kube-apiserver-embed-certs-458066
	0a869559af2c6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   4db3c3f431502       kube-controller-manager-embed-certs-458066
	6b9354d01c92c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   76d622d673f72       kube-apiserver-embed-certs-458066
	
	
	==> coredns [5dca79959ab052931eb8dfe83b403032d1dc8cb5cd45d5c9558c1acef26a20a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fd5c25ddf467f94545e934f86a69678236422b4aabc5bb7c79a7d2c178cc6204] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-458066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-458066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=embed-certs-458066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T20_09_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 20:09:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-458066
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 20:25:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 20:24:35 +0000   Fri, 06 Sep 2024 20:09:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 20:24:35 +0000   Fri, 06 Sep 2024 20:09:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 20:24:35 +0000   Fri, 06 Sep 2024 20:09:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 20:24:35 +0000   Fri, 06 Sep 2024 20:09:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    embed-certs-458066
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c773c140511b4e9ca1fd1ead399a4e72
	  System UUID:                c773c140-511b-4e9c-a1fd-1ead399a4e72
	  Boot ID:                    2dadd490-81d8-412f-9cc4-b0b6e2179136
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-br45p                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-gtlxq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-458066                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-458066             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-458066    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-rzx2f                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-458066             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-74kzz               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-458066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-458066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-458066 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-458066 event: Registered Node embed-certs-458066 in Controller
	
	
	==> dmesg <==
	[  +0.050295] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040174] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.780626] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.467512] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.620471] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep 6 20:04] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.057434] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056663] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.194576] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.120122] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.293621] systemd-fstab-generator[699]: Ignoring "noauto" option for root device
	[  +4.235437] systemd-fstab-generator[787]: Ignoring "noauto" option for root device
	[  +1.956944] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.060731] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.544535] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.758269] kauditd_printk_skb: 87 callbacks suppressed
	[Sep 6 20:08] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.314390] systemd-fstab-generator[2549]: Ignoring "noauto" option for root device
	[Sep 6 20:09] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.660149] systemd-fstab-generator[2870]: Ignoring "noauto" option for root device
	[  +5.390436] systemd-fstab-generator[2983]: Ignoring "noauto" option for root device
	[  +0.124985] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.145270] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [5f30c0a5d7a13b5a7143ad119b5b65b7d84f9933225688694c3927007ce8208e] <==
	{"level":"info","ts":"2024-09-06T20:09:01.905000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-06T20:09:01.905058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgPreVoteResp from 86c29206b457f123 at term 1"}
	{"level":"info","ts":"2024-09-06T20:09:01.905094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became candidate at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:01.905118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgVoteResp from 86c29206b457f123 at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:01.905151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became leader at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:01.905176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86c29206b457f123 elected leader 86c29206b457f123 at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:01.908086Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"86c29206b457f123","local-member-attributes":"{Name:embed-certs-458066 ClientURLs:[https://192.168.39.118:2379]}","request-path":"/0/members/86c29206b457f123/attributes","cluster-id":"56e4fbef5627b38f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T20:09:01.909829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:09:01.909856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:09:01.923157Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:09:01.909951Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:01.925691Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T20:09:01.926901Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:09:01.927157Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T20:09:01.927225Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T20:09:01.928282Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.118:2379"}
	{"level":"info","ts":"2024-09-06T20:09:01.930272Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"56e4fbef5627b38f","local-member-id":"86c29206b457f123","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:01.930456Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:01.930507Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:19:02.055480Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":689}
	{"level":"info","ts":"2024-09-06T20:19:02.064363Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":689,"took":"8.486339ms","hash":4038972021,"current-db-size-bytes":2387968,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2387968,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-09-06T20:19:02.064435Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4038972021,"revision":689,"compact-revision":-1}
	{"level":"info","ts":"2024-09-06T20:24:02.063185Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":931}
	{"level":"info","ts":"2024-09-06T20:24:02.067793Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":931,"took":"4.139496ms","hash":735649072,"current-db-size-bytes":2387968,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1597440,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-06T20:24:02.067854Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":735649072,"revision":931,"compact-revision":689}
	
	
	==> kernel <==
	 20:25:34 up 21 min,  0 users,  load average: 0.17, 0.18, 0.12
	Linux embed-certs-458066 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3c4dcf1da46f860ab0d70d0478786996b10b91b427863964edcc8c26ce450672] <==
	I0906 20:22:04.734037       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:22:04.734131       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:24:03.730948       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:24:03.731847       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0906 20:24:04.733194       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:24:04.733255       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0906 20:24:04.733194       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:24:04.733343       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:24:04.734712       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:24:04.734802       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:25:04.735696       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:25:04.735826       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0906 20:25:04.736054       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:25:04.736151       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:25:04.737008       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:25:04.738194       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [6b9354d01c92c82f9751f0c5001763ce1a1b2d8897a98cb74a25f2686ec0357d] <==
	W0906 20:08:54.776538       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.804214       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.861208       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.868896       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.920708       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.924275       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.943381       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.969254       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:54.970864       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.029157       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.054970       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.087108       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.093599       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.135923       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.137296       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.179697       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.192291       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.403973       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.505681       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.542929       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.551391       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.614851       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.670380       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.744727       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:08:55.848199       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0a869559af2c6de5b7dcb71ef5b628f00cb225f2afe49e3da71ccd3beeb5b7b0] <==
	E0906 20:20:10.716436       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:20:11.276601       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:20:25.467839       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="209.302µs"
	I0906 20:20:37.466268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="91.223µs"
	E0906 20:20:40.723827       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:20:41.284669       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:21:10.730698       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:21:11.293015       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:21:40.737570       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:21:41.300679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:22:10.745285       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:22:11.311617       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:22:40.752178       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:22:41.324009       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:23:10.758939       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:23:11.332588       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:23:40.766706       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:23:41.341607       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:24:10.772627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:24:11.358715       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:24:35.658447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-458066"
	E0906 20:24:40.779341       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:24:41.368925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:25:10.786470       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:25:11.378691       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f743811765445b814dcf080d2da3c45480620c42cd79fa8c2de33f996dd26c70] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 20:09:12.604217       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 20:09:12.616720       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.118"]
	E0906 20:09:12.618914       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 20:09:12.699347       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 20:09:12.699396       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 20:09:12.699431       1 server_linux.go:169] "Using iptables Proxier"
	I0906 20:09:12.712997       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 20:09:12.713323       1 server.go:483] "Version info" version="v1.31.0"
	I0906 20:09:12.713341       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:09:12.714825       1 config.go:197] "Starting service config controller"
	I0906 20:09:12.714851       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 20:09:12.714877       1 config.go:104] "Starting endpoint slice config controller"
	I0906 20:09:12.714882       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 20:09:12.715658       1 config.go:326] "Starting node config controller"
	I0906 20:09:12.715669       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 20:09:12.820932       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 20:09:12.820977       1 shared_informer.go:320] Caches are synced for node config
	I0906 20:09:12.821007       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0967ba02d355613d37db995ed77ff29c0e033806e963c18202dedeb7a6dc4c83] <==
	W0906 20:09:04.594040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 20:09:04.594232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.595431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 20:09:04.595466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.605043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 20:09:04.605076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.613026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:04.613059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.642978       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 20:09:04.643042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.685155       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 20:09:04.685211       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0906 20:09:04.728309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 20:09:04.728369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.747629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:04.747813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.852053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:04.852454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.860006       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 20:09:04.860149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:04.972957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0906 20:09:04.973159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:05.030703       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:05.031086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0906 20:09:06.937005       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 20:24:33 embed-certs-458066 kubelet[2877]: E0906 20:24:33.447082    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:24:36 embed-certs-458066 kubelet[2877]: E0906 20:24:36.678474    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654276678253045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:36 embed-certs-458066 kubelet[2877]: E0906 20:24:36.678525    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654276678253045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:46 embed-certs-458066 kubelet[2877]: E0906 20:24:46.449004    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:24:46 embed-certs-458066 kubelet[2877]: E0906 20:24:46.679832    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654286679514147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:46 embed-certs-458066 kubelet[2877]: E0906 20:24:46.679869    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654286679514147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:56 embed-certs-458066 kubelet[2877]: E0906 20:24:56.681670    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654296681316621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:56 embed-certs-458066 kubelet[2877]: E0906 20:24:56.681737    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654296681316621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:00 embed-certs-458066 kubelet[2877]: E0906 20:25:00.449060    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:25:06 embed-certs-458066 kubelet[2877]: E0906 20:25:06.467100    2877 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 20:25:06 embed-certs-458066 kubelet[2877]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 20:25:06 embed-certs-458066 kubelet[2877]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 20:25:06 embed-certs-458066 kubelet[2877]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 20:25:06 embed-certs-458066 kubelet[2877]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 20:25:06 embed-certs-458066 kubelet[2877]: E0906 20:25:06.687163    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654306686442146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:06 embed-certs-458066 kubelet[2877]: E0906 20:25:06.687347    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654306686442146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:13 embed-certs-458066 kubelet[2877]: E0906 20:25:13.446240    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:25:16 embed-certs-458066 kubelet[2877]: E0906 20:25:16.689431    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654316689075789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:16 embed-certs-458066 kubelet[2877]: E0906 20:25:16.689475    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654316689075789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:25 embed-certs-458066 kubelet[2877]: E0906 20:25:25.461460    2877 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 06 20:25:25 embed-certs-458066 kubelet[2877]: E0906 20:25:25.461542    2877 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 06 20:25:25 embed-certs-458066 kubelet[2877]: E0906 20:25:25.461876    2877 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb8ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-74kzz_kube-system(5de1ac37-3f32-44f5-a2ba-e0a3173782ae): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 06 20:25:25 embed-certs-458066 kubelet[2877]: E0906 20:25:25.463407    2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-74kzz" podUID="5de1ac37-3f32-44f5-a2ba-e0a3173782ae"
	Sep 06 20:25:26 embed-certs-458066 kubelet[2877]: E0906 20:25:26.692314    2877 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654326691833553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:26 embed-certs-458066 kubelet[2877]: E0906 20:25:26.692360    2877 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654326691833553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [20a310412e4fc593a868b560923edaf2a2d97a8781f3bf198ddef6fcbabc30ea] <==
	I0906 20:09:13.742019       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 20:09:13.755050       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 20:09:13.755265       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 20:09:13.769441       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 20:09:13.769721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-458066_5b9d34f8-d4c0-47e9-8998-bdb11653cc78!
	I0906 20:09:13.776591       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3138066-1db7-4a57-be2d-23292dc46eb3", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-458066_5b9d34f8-d4c0-47e9-8998-bdb11653cc78 became leader
	I0906 20:09:13.871167       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-458066_5b9d34f8-d4c0-47e9-8998-bdb11653cc78!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-458066 -n embed-certs-458066
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-458066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-74kzz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-458066 describe pod metrics-server-6867b74b74-74kzz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-458066 describe pod metrics-server-6867b74b74-74kzz: exit status 1 (63.477044ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-74kzz" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-458066 describe pod metrics-server-6867b74b74-74kzz: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (429.15s)
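The check that timed out above can be re-run by hand when triaging; a minimal sketch, assuming the embed-certs-458066 profile has not yet been deleted and kubectl still has its context (the test waits for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, then inspects the dashboard-metrics-scraper deployment):

	# list the dashboard addon pods the test waits for
	kubectl --context embed-certs-458066 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# inspect the scraper deployment whose image is expected to contain registry.k8s.io/echoserver:1.4
	kubectl --context embed-certs-458066 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper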

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (455.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-06 20:26:20.562298639 +0000 UTC m=+7027.052052172
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-653828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-653828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.464µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-653828 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-653828 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-653828 logs -n 25: (1.161186363s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-653828  | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC | 06 Sep 24 19:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC |                     |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-504385                  | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-458066                 | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-843298        | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653828       | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-843298             | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:24 UTC | 06 Sep 24 20:24 UTC |
	| start   | -p newest-cni-113806 --memory=2200 --alsologtostderr   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:24 UTC | 06 Sep 24 20:25 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 20:24 UTC | 06 Sep 24 20:24 UTC |
	| addons  | enable metrics-server -p newest-cni-113806             | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:25 UTC | 06 Sep 24 20:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-113806                                   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:25 UTC | 06 Sep 24 20:25 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-113806                  | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:25 UTC | 06 Sep 24 20:25 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-113806 --memory=2200 --alsologtostderr   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:25 UTC | 06 Sep 24 20:26 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 20:25 UTC | 06 Sep 24 20:25 UTC |
	| image   | newest-cni-113806 image list                           | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:26 UTC | 06 Sep 24 20:26 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-113806                                   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:26 UTC | 06 Sep 24 20:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-113806                                   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:26 UTC | 06 Sep 24 20:26 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-113806                                   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:26 UTC | 06 Sep 24 20:26 UTC |
	| delete  | -p newest-cni-113806                                   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:26 UTC | 06 Sep 24 20:26 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 20:25:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:25:32.073827   80582 out.go:345] Setting OutFile to fd 1 ...
	I0906 20:25:32.073954   80582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:25:32.073963   80582 out.go:358] Setting ErrFile to fd 2...
	I0906 20:25:32.073968   80582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:25:32.074174   80582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 20:25:32.074734   80582 out.go:352] Setting JSON to false
	I0906 20:25:32.075693   80582 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7681,"bootTime":1725646651,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 20:25:32.075752   80582 start.go:139] virtualization: kvm guest
	I0906 20:25:32.078125   80582 out.go:177] * [newest-cni-113806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 20:25:32.079359   80582 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 20:25:32.079367   80582 notify.go:220] Checking for updates...
	I0906 20:25:32.080786   80582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:25:32.082192   80582 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:25:32.083261   80582 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:25:32.084554   80582 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 20:25:32.085765   80582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:25:32.087207   80582 config.go:182] Loaded profile config "newest-cni-113806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:25:32.087653   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:25:32.087723   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:25:32.103007   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33223
	I0906 20:25:32.103350   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:25:32.103930   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:25:32.103958   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:25:32.104254   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:25:32.104469   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:32.104730   80582 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 20:25:32.105156   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:25:32.105198   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:25:32.121545   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I0906 20:25:32.121956   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:25:32.122415   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:25:32.122450   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:25:32.122788   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:25:32.122997   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:32.160138   80582 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 20:25:32.161400   80582 start.go:297] selected driver: kvm2
	I0906 20:25:32.161415   80582 start.go:901] validating driver "kvm2" against &{Name:newest-cni-113806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-113806 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]
ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:25:32.161520   80582 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:25:32.162198   80582 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:25:32.162267   80582 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 20:25:32.178230   80582 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 20:25:32.178660   80582 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 20:25:32.178700   80582 cni.go:84] Creating CNI manager for ""
	I0906 20:25:32.178711   80582 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
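The cni.go lines above record the CNI decision: no CNI was requested, and the kvm2 driver together with the crio runtime leads minikube to recommend its built-in bridge CNI. A minimal Go sketch of that decision logic, with illustrative names rather than minikube's actual API:

package main

import "fmt"

// chooseCNI mirrors, in simplified form, the decision logged above: an explicit
// request wins; otherwise CRI runtimes (crio, containerd) get the bridge CNI.
// Function and argument names are illustrative only.
func chooseCNI(requested, driver, runtime string) string {
	if requested != "" {
		return requested
	}
	if runtime == "crio" || runtime == "containerd" {
		return "bridge" // what cni.go:146 reports for kvm2 + crio
	}
	return "" // e.g. the docker runtime keeps kubelet's default networking
}

func main() {
	fmt.Println(chooseCNI("", "kvm2", "crio")) // prints "bridge"
}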
	I0906 20:25:32.178760   80582 start.go:340] cluster config:
	{Name:newest-cni-113806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-113806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExp
iration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:25:32.178898   80582 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:25:32.180636   80582 out.go:177] * Starting "newest-cni-113806" primary control-plane node in "newest-cni-113806" cluster
	I0906 20:25:32.181601   80582 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:25:32.181633   80582 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 20:25:32.181645   80582 cache.go:56] Caching tarball of preloaded images
	I0906 20:25:32.181727   80582 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 20:25:32.181746   80582 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 20:25:32.181834   80582 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/config.json ...
	I0906 20:25:32.182005   80582 start.go:360] acquireMachinesLock for newest-cni-113806: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:25:32.182044   80582 start.go:364] duration metric: took 22.052µs to acquireMachinesLock for "newest-cni-113806"
	I0906 20:25:32.182062   80582 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:25:32.182071   80582 fix.go:54] fixHost starting: 
	I0906 20:25:32.182326   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:25:32.182359   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:25:32.197736   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
	I0906 20:25:32.198109   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:25:32.198588   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:25:32.198614   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:25:32.198954   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:25:32.199150   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:32.199343   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetState
	I0906 20:25:32.201206   80582 fix.go:112] recreateIfNeeded on newest-cni-113806: state=Stopped err=<nil>
	I0906 20:25:32.201251   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	W0906 20:25:32.201413   80582 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:25:32.203758   80582 out.go:177] * Restarting existing kvm2 VM for "newest-cni-113806" ...
	I0906 20:25:32.205036   80582 main.go:141] libmachine: (newest-cni-113806) Calling .Start
	I0906 20:25:32.205210   80582 main.go:141] libmachine: (newest-cni-113806) Ensuring networks are active...
	I0906 20:25:32.206200   80582 main.go:141] libmachine: (newest-cni-113806) Ensuring network default is active
	I0906 20:25:32.206524   80582 main.go:141] libmachine: (newest-cni-113806) Ensuring network mk-newest-cni-113806 is active
	I0906 20:25:32.206916   80582 main.go:141] libmachine: (newest-cni-113806) Getting domain xml...
	I0906 20:25:32.207663   80582 main.go:141] libmachine: (newest-cni-113806) Creating domain...
	I0906 20:25:33.482001   80582 main.go:141] libmachine: (newest-cni-113806) Waiting to get IP...
	I0906 20:25:33.482941   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:33.483296   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:33.483383   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:33.483288   80617 retry.go:31] will retry after 188.273329ms: waiting for machine to come up
	I0906 20:25:33.673844   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:33.674241   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:33.674268   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:33.674184   80617 retry.go:31] will retry after 329.193658ms: waiting for machine to come up
	I0906 20:25:34.004649   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:34.005241   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:34.005273   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:34.005203   80617 retry.go:31] will retry after 322.165109ms: waiting for machine to come up
	I0906 20:25:34.328661   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:34.329156   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:34.329184   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:34.329113   80617 retry.go:31] will retry after 491.753446ms: waiting for machine to come up
	I0906 20:25:34.822666   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:34.823203   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:34.823240   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:34.823123   80617 retry.go:31] will retry after 545.051035ms: waiting for machine to come up
	I0906 20:25:35.369960   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:35.370488   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:35.370528   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:35.370453   80617 retry.go:31] will retry after 843.180228ms: waiting for machine to come up
	I0906 20:25:36.215395   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:36.215817   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:36.215842   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:36.215768   80617 retry.go:31] will retry after 1.090714512s: waiting for machine to come up
	I0906 20:25:37.308329   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:37.308845   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:37.308879   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:37.308820   80617 retry.go:31] will retry after 1.140959288s: waiting for machine to come up
	I0906 20:25:38.451185   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:38.451629   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:38.451645   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:38.451574   80617 retry.go:31] will retry after 1.771257789s: waiting for machine to come up
	I0906 20:25:40.224498   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:40.224935   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:40.224958   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:40.224894   80617 retry.go:31] will retry after 1.772206314s: waiting for machine to come up
	I0906 20:25:41.998401   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:41.998739   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:41.998765   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:41.998701   80617 retry.go:31] will retry after 2.308763413s: waiting for machine to come up
	I0906 20:25:44.310267   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:44.310761   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:44.310789   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:44.310717   80617 retry.go:31] will retry after 2.353216656s: waiting for machine to come up
	I0906 20:25:46.667159   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:46.667605   80582 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:25:46.667656   80582 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:25:46.667550   80617 retry.go:31] will retry after 3.777869513s: waiting for machine to come up
	I0906 20:25:50.448891   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.449311   80582 main.go:141] libmachine: (newest-cni-113806) Found IP for machine: 192.168.72.88
	I0906 20:25:50.449329   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has current primary IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
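The `retry.go:31] will retry after ...` lines above show how the restart waits for the VM's IP: it polls the libvirt DHCP leases and sleeps for a growing, jittered interval between attempts until an address appears or a deadline passes. A minimal sketch of that retry pattern; the lookup callback is a stand-in for the lease query, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt, much like
// the "will retry after ... waiting for machine to come up" lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // jittered back-off
		fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.72.88", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}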
	I0906 20:25:50.449335   80582 main.go:141] libmachine: (newest-cni-113806) Reserving static IP address...
	I0906 20:25:50.449681   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "newest-cni-113806", mac: "52:54:00:3d:27:d2", ip: "192.168.72.88"} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:50.449699   80582 main.go:141] libmachine: (newest-cni-113806) Reserved static IP address: 192.168.72.88
	I0906 20:25:50.449711   80582 main.go:141] libmachine: (newest-cni-113806) DBG | skip adding static IP to network mk-newest-cni-113806 - found existing host DHCP lease matching {name: "newest-cni-113806", mac: "52:54:00:3d:27:d2", ip: "192.168.72.88"}
	I0906 20:25:50.449721   80582 main.go:141] libmachine: (newest-cni-113806) DBG | Getting to WaitForSSH function...
	I0906 20:25:50.449736   80582 main.go:141] libmachine: (newest-cni-113806) Waiting for SSH to be available...
	I0906 20:25:50.451624   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.451903   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:50.451930   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.452068   80582 main.go:141] libmachine: (newest-cni-113806) DBG | Using SSH client type: external
	I0906 20:25:50.452094   80582 main.go:141] libmachine: (newest-cni-113806) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa (-rw-------)
	I0906 20:25:50.452120   80582 main.go:141] libmachine: (newest-cni-113806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:25:50.452135   80582 main.go:141] libmachine: (newest-cni-113806) DBG | About to run SSH command:
	I0906 20:25:50.452152   80582 main.go:141] libmachine: (newest-cni-113806) DBG | exit 0
	I0906 20:25:50.576702   80582 main.go:141] libmachine: (newest-cni-113806) DBG | SSH cmd err, output: <nil>: 
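`Using SSH client type: external` above means availability is probed by invoking the system ssh binary with the listed options and running `exit 0` until it succeeds. A sketch of that probe with os/exec, reusing the address, user, and key path from the log; this is illustrative, not minikube's sshutil implementation. UserKnownHostsFile=/dev/null and StrictHostKeyChecking=no avoid host-key churn when the VM is recreated.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest through the system ssh binary, using the
// same kind of options the external SSH client lines above show.
func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	addr := "192.168.72.88"
	key := "/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa"
	for !sshReady(addr, key) {
		time.Sleep(time.Second)
	}
	fmt.Println("SSH is available")
}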
	I0906 20:25:50.577086   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetConfigRaw
	I0906 20:25:50.577690   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetIP
	I0906 20:25:50.579888   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.580232   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:50.580267   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.580437   80582 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/config.json ...
	I0906 20:25:50.580622   80582 machine.go:93] provisionDockerMachine start ...
	I0906 20:25:50.580639   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:50.580845   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:25:50.582755   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.583031   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:50.583061   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.583154   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:25:50.583313   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:50.583447   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:50.583566   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:25:50.583709   80582 main.go:141] libmachine: Using SSH client type: native
	I0906 20:25:50.583909   80582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0906 20:25:50.583922   80582 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:25:50.689285   80582 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:25:50.689318   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetMachineName
	I0906 20:25:50.689579   80582 buildroot.go:166] provisioning hostname "newest-cni-113806"
	I0906 20:25:50.689595   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetMachineName
	I0906 20:25:50.689773   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:25:50.692199   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.692536   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:50.692569   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.692742   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:25:50.692934   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:50.693066   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:50.693192   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:25:50.693333   80582 main.go:141] libmachine: Using SSH client type: native
	I0906 20:25:50.693515   80582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0906 20:25:50.693533   80582 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-113806 && echo "newest-cni-113806" | sudo tee /etc/hostname
	I0906 20:25:50.813701   80582 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-113806
	
	I0906 20:25:50.813729   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:25:50.816339   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.816703   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:50.816755   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.816897   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:25:50.817099   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:50.817262   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:50.817385   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:25:50.817549   80582 main.go:141] libmachine: Using SSH client type: native
	I0906 20:25:50.817781   80582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0906 20:25:50.817801   80582 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-113806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-113806/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-113806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:25:50.930470   80582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:25:50.930509   80582 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:25:50.930536   80582 buildroot.go:174] setting up certificates
	I0906 20:25:50.930555   80582 provision.go:84] configureAuth start
	I0906 20:25:50.930573   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetMachineName
	I0906 20:25:50.930915   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetIP
	I0906 20:25:50.933808   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.934238   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:50.934269   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.934350   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:25:50.936523   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.936918   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:50.936946   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:50.937047   80582 provision.go:143] copyHostCerts
	I0906 20:25:50.937119   80582 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:25:50.937143   80582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:25:50.937220   80582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:25:50.937345   80582 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:25:50.937356   80582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:25:50.937393   80582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:25:50.937479   80582 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:25:50.937490   80582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:25:50.937524   80582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:25:50.937606   80582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.newest-cni-113806 san=[127.0.0.1 192.168.72.88 localhost minikube newest-cni-113806]
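The provision.go:117 line above issues a server certificate signed by the profile CA, with the SANs listed (127.0.0.1, 192.168.72.88, localhost, minikube, newest-cni-113806); the resulting server.pem and server-key.pem are then copied to the guest in the scp lines that follow. A minimal crypto/x509 sketch of issuing such a CA-signed certificate; here a throwaway CA is generated inline, whereas minikube loads the existing ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; minikube would load ca.pem / ca-key.pem instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"example-ca"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-113806"}},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-113806"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.88")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}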
	I0906 20:25:51.024880   80582 provision.go:177] copyRemoteCerts
	I0906 20:25:51.024934   80582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:25:51.024959   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:25:51.027582   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.027884   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:51.027906   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.028118   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:25:51.028286   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:51.028432   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:25:51.028584   80582 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa Username:docker}
	I0906 20:25:51.110929   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:25:51.135573   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:25:51.159004   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0906 20:25:51.182969   80582 provision.go:87] duration metric: took 252.397615ms to configureAuth
	I0906 20:25:51.183004   80582 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:25:51.183204   80582 config.go:182] Loaded profile config "newest-cni-113806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:25:51.183278   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:25:51.185889   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.186229   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:51.186259   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.186442   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:25:51.186632   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:51.186868   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:51.187027   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:25:51.187195   80582 main.go:141] libmachine: Using SSH client type: native
	I0906 20:25:51.187397   80582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0906 20:25:51.187426   80582 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:25:51.418911   80582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:25:51.418938   80582 machine.go:96] duration metric: took 838.302885ms to provisionDockerMachine
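Each `About to run SSH command` / `SSH cmd err, output:` pair above is a command executed on the guest over SSH, such as writing /etc/sysconfig/crio.minikube and restarting crio just before this point. A sketch of that pattern with golang.org/x/crypto/ssh, using the key path and address from the log and a hypothetical command; it is not minikube's ssh_runner and has only minimal error handling.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one command on the guest over SSH and returns its combined output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.72.88", "docker",
		"/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa",
		"cat /etc/os-release") // hypothetical command for the sketch
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}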
	I0906 20:25:51.418949   80582 start.go:293] postStartSetup for "newest-cni-113806" (driver="kvm2")
	I0906 20:25:51.418960   80582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:25:51.418974   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:51.419270   80582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:25:51.419299   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:25:51.422129   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.422418   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:51.422456   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.422575   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:25:51.422770   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:51.422931   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:25:51.423054   80582 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa Username:docker}
	I0906 20:25:51.504179   80582 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:25:51.508229   80582 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:25:51.508254   80582 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:25:51.508324   80582 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:25:51.508395   80582 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:25:51.508479   80582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:25:51.518169   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:25:51.542651   80582 start.go:296] duration metric: took 123.687416ms for postStartSetup
	I0906 20:25:51.542689   80582 fix.go:56] duration metric: took 19.360618002s for fixHost
	I0906 20:25:51.542708   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:25:51.545852   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.546181   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:51.546204   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.546354   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:25:51.546560   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:51.546755   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:51.546938   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:25:51.547125   80582 main.go:141] libmachine: Using SSH client type: native
	I0906 20:25:51.547279   80582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.88 22 <nil> <nil>}
	I0906 20:25:51.547289   80582 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:25:51.649810   80582 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725654351.606166429
	
	I0906 20:25:51.649840   80582 fix.go:216] guest clock: 1725654351.606166429
	I0906 20:25:51.649848   80582 fix.go:229] Guest: 2024-09-06 20:25:51.606166429 +0000 UTC Remote: 2024-09-06 20:25:51.542692977 +0000 UTC m=+19.504518026 (delta=63.473452ms)
	I0906 20:25:51.649885   80582 fix.go:200] guest clock delta is within tolerance: 63.473452ms
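The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and confirm the skew (63.473452ms here) is within tolerance. A small sketch of that comparison using the values from the log; the 2s tolerance is an assumption for illustration, not minikube's constant:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the guest's `date +%s.%N` output into a time.Time.
// It assumes a full nine-digit nanosecond field, as in the log output above.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1725654351.606166429") // guest value from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, time.September, 6, 20, 25, 51, 542692977, time.UTC) // "Remote" timestamp above
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for the sketch, not minikube's constant
	fmt.Printf("guest clock delta: %v (within %v: %v)\n", delta, tolerance, delta < tolerance)
}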
	I0906 20:25:51.649891   80582 start.go:83] releasing machines lock for "newest-cni-113806", held for 19.467836614s
	I0906 20:25:51.649933   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:51.650201   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetIP
	I0906 20:25:51.652940   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.653279   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:51.653309   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.653444   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:51.653931   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:51.654088   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:25:51.654182   80582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:25:51.654218   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:25:51.654342   80582 ssh_runner.go:195] Run: cat /version.json
	I0906 20:25:51.654371   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:25:51.656898   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.657234   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.657296   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:51.657324   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.657439   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:25:51.657619   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:51.657635   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:51.657656   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:51.657809   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:25:51.657743   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:25:51.657945   80582 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa Username:docker}
	I0906 20:25:51.658059   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:25:51.658178   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:25:51.658351   80582 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa Username:docker}
	I0906 20:25:51.752956   80582 ssh_runner.go:195] Run: systemctl --version
	I0906 20:25:51.759759   80582 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:25:51.906001   80582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:25:51.912893   80582 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:25:51.912963   80582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:25:51.930351   80582 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:25:51.930378   80582 start.go:495] detecting cgroup driver to use...
	I0906 20:25:51.930433   80582 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:25:51.947341   80582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:25:51.961490   80582 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:25:51.961558   80582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:25:51.975283   80582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:25:51.988957   80582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:25:52.104307   80582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:25:52.258036   80582 docker.go:233] disabling docker service ...
	I0906 20:25:52.258102   80582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:25:52.273574   80582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:25:52.286526   80582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:25:52.437354   80582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:25:52.558343   80582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:25:52.572958   80582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:25:52.592078   80582 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:25:52.592144   80582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:25:52.602773   80582 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:25:52.602856   80582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:25:52.613756   80582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:25:52.624689   80582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:25:52.635559   80582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:25:52.647080   80582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:25:52.657772   80582 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:25:52.676600   80582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:25:52.686995   80582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:25:52.696483   80582 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:25:52.696535   80582 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:25:52.709473   80582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
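The crio.go:166 warning above is non-fatal: when the bridge-nf-call-iptables sysctl key is missing, the br_netfilter module simply has not been loaded yet, so minikube loads it and then enables IP forwarding. A sketch of that probe-then-fallback sequence with os/exec, using the same commands the log shows; everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter checks the bridge-nf-call-iptables sysctl and, if the
// key is missing, loads br_netfilter before enabling IP forwarding, mirroring
// the probe-then-fallback sequence in the log above.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("failed to enable bridge netfilter:", err)
	}
}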
	I0906 20:25:52.720199   80582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:25:52.843900   80582 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:25:52.946311   80582 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:25:52.946383   80582 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:25:52.951628   80582 start.go:563] Will wait 60s for crictl version
	I0906 20:25:52.951691   80582 ssh_runner.go:195] Run: which crictl
	I0906 20:25:52.956005   80582 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:25:52.995627   80582 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:25:52.995717   80582 ssh_runner.go:195] Run: crio --version
	I0906 20:25:53.027730   80582 ssh_runner.go:195] Run: crio --version
	I0906 20:25:53.058246   80582 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:25:53.059705   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetIP
	I0906 20:25:53.062261   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:53.062550   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:25:53.062573   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:25:53.062767   80582 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0906 20:25:53.067072   80582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:25:53.080835   80582 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0906 20:25:53.082011   80582 kubeadm.go:883] updating cluster {Name:newest-cni-113806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-113806 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress:
Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:25:53.082155   80582 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:25:53.082226   80582 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:25:53.118052   80582 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:25:53.118114   80582 ssh_runner.go:195] Run: which lz4
	I0906 20:25:53.122016   80582 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:25:53.125981   80582 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:25:53.126008   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:25:54.526345   80582 crio.go:462] duration metric: took 1.404361409s to copy over tarball
	I0906 20:25:54.526445   80582 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:25:56.601691   80582 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.075217326s)
	I0906 20:25:56.601718   80582 crio.go:469] duration metric: took 2.07533988s to extract the tarball
	I0906 20:25:56.601725   80582 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:25:56.638533   80582 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:25:56.685561   80582 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:25:56.685584   80582 cache_images.go:84] Images are preloaded, skipping loading
	I0906 20:25:56.685591   80582 kubeadm.go:934] updating node { 192.168.72.88 8443 v1.31.0 crio true true} ...
	I0906 20:25:56.685699   80582 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-113806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-113806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:25:56.685775   80582 ssh_runner.go:195] Run: crio config
	I0906 20:25:56.736583   80582 cni.go:84] Creating CNI manager for ""
	I0906 20:25:56.736606   80582 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:25:56.736624   80582 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0906 20:25:56.736651   80582 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.88 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-113806 NodeName:newest-cni-113806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.72.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:25:56.736833   80582 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.88
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-113806"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
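
	A minimal Go sketch, assuming gopkg.in/yaml.v3 is available, that unmarshals the KubeletConfiguration fields shown above (cgroupDriver, evictionHard, failSwapOn) to illustrate how the generated values parse; this is illustrative only, not minikube's own code:

	package main

	import (
		"fmt"
		"log"

		"gopkg.in/yaml.v3" // assumption: any YAML library works; yaml.v3 is used here for the sketch
	)

	// kubeletConfig mirrors only the KubeletConfiguration fields that appear in the log above.
	type kubeletConfig struct {
		CgroupDriver string            `yaml:"cgroupDriver"`
		EvictionHard map[string]string `yaml:"evictionHard"`
		FailSwapOn   bool              `yaml:"failSwapOn"`
	}

	const snippet = `
	cgroupDriver: cgroupfs
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	`

	func main() {
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(snippet), &cfg); err != nil {
			log.Fatalf("unmarshal kubelet config: %v", err)
		}
		// Expect cgroupfs plus three 0% eviction thresholds, matching the generated config.
		fmt.Println(cfg.CgroupDriver, cfg.EvictionHard, cfg.FailSwapOn)
	}
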
	I0906 20:25:56.736912   80582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:25:56.747705   80582 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:25:56.747763   80582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:25:56.757472   80582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0906 20:25:56.773932   80582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:25:56.791514   80582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0906 20:25:56.809683   80582 ssh_runner.go:195] Run: grep 192.168.72.88	control-plane.minikube.internal$ /etc/hosts
	I0906 20:25:56.813532   80582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
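
	The one-liner above filters any existing control-plane.minikube.internal entry out of /etc/hosts and appends the node IP. A rough Go equivalent of that filter-and-append pattern, for illustration only (minikube performs this over SSH with the shell pipeline shown):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mimics the shell pipeline: drop any existing line ending in
	// "\t<name>", then append "<ip>\t<name>" as a fresh mapping.
	func ensureHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // replaced by the entry appended below
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		fmt.Print(ensureHostsEntry(string(data), "192.168.72.88", "control-plane.minikube.internal"))
	}
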
	I0906 20:25:56.827228   80582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:25:56.972216   80582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:25:56.990235   80582 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806 for IP: 192.168.72.88
	I0906 20:25:56.990259   80582 certs.go:194] generating shared ca certs ...
	I0906 20:25:56.990276   80582 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:25:56.990437   80582 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:25:56.990511   80582 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:25:56.990532   80582 certs.go:256] generating profile certs ...
	I0906 20:25:56.990757   80582 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/client.key
	I0906 20:25:56.990843   80582 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/apiserver.key.857359ff
	I0906 20:25:56.990890   80582 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/proxy-client.key
	I0906 20:25:56.991018   80582 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:25:56.991057   80582 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:25:56.991071   80582 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:25:56.991107   80582 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:25:56.991132   80582 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:25:56.991164   80582 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:25:56.991233   80582 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:25:56.991814   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:25:57.027324   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:25:57.057958   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:25:57.090095   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:25:57.120569   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 20:25:57.149252   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:25:57.177821   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:25:57.202021   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:25:57.225228   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:25:57.249106   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:25:57.273534   80582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:25:57.297788   80582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:25:57.314839   80582 ssh_runner.go:195] Run: openssl version
	I0906 20:25:57.320678   80582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:25:57.331225   80582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:25:57.336340   80582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:25:57.336396   80582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:25:57.342232   80582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:25:57.352576   80582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:25:57.362954   80582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:25:57.367406   80582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:25:57.367457   80582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:25:57.372897   80582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:25:57.383190   80582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:25:57.393666   80582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:25:57.398027   80582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:25:57.398077   80582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:25:57.403587   80582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:25:57.414045   80582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:25:57.418370   80582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:25:57.424256   80582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:25:57.429866   80582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:25:57.435885   80582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:25:57.442280   80582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:25:57.447983   80582 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
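
	Each "openssl x509 -checkend 86400" run above asks whether the certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509, shown as a sketch rather than minikube's actual code path (the certificate path is taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// which is what `openssl x509 -checkend <seconds>` tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
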
	I0906 20:25:57.453612   80582 kubeadm.go:392] StartCluster: {Name:newest-cni-113806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-113806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:25:57.453699   80582 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:25:57.453758   80582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:25:57.492139   80582 cri.go:89] found id: ""
	I0906 20:25:57.492208   80582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:25:57.503114   80582 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:25:57.503135   80582 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:25:57.503188   80582 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:25:57.512463   80582 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:25:57.513055   80582 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-113806" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:25:57.513304   80582 kubeconfig.go:62] /home/jenkins/minikube-integration/19576-6021/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-113806" cluster setting kubeconfig missing "newest-cni-113806" context setting]
	I0906 20:25:57.513733   80582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:25:57.514923   80582 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:25:57.524301   80582 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.88
	I0906 20:25:57.524338   80582 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:25:57.524351   80582 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:25:57.524401   80582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:25:57.567574   80582 cri.go:89] found id: ""
	I0906 20:25:57.567652   80582 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:25:57.583208   80582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:25:57.592580   80582 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:25:57.592598   80582 kubeadm.go:157] found existing configuration files:
	
	I0906 20:25:57.592643   80582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:25:57.601236   80582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:25:57.601299   80582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:25:57.610225   80582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:25:57.618890   80582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:25:57.618956   80582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:25:57.628349   80582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:25:57.637376   80582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:25:57.637445   80582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:25:57.646493   80582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:25:57.655801   80582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:25:57.655859   80582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:25:57.665229   80582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:25:57.674434   80582 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:25:57.777806   80582 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:25:58.719019   80582 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:25:58.935226   80582 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:25:59.002117   80582 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:25:59.104213   80582 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:25:59.104327   80582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:25:59.604456   80582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:26:00.104396   80582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:26:00.604620   80582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:26:01.104838   80582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:26:01.170297   80582 api_server.go:72] duration metric: took 2.066081623s to wait for apiserver process to appear ...
	I0906 20:26:01.170334   80582 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:26:01.170356   80582 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0906 20:26:01.170950   80582 api_server.go:269] stopped: https://192.168.72.88:8443/healthz: Get "https://192.168.72.88:8443/healthz": dial tcp 192.168.72.88:8443: connect: connection refused
	I0906 20:26:01.670653   80582 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0906 20:26:03.827254   80582 api_server.go:279] https://192.168.72.88:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:26:03.827288   80582 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:26:03.827301   80582 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0906 20:26:03.859866   80582 api_server.go:279] https://192.168.72.88:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:26:03.859898   80582 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:26:04.171035   80582 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0906 20:26:04.189002   80582 api_server.go:279] https://192.168.72.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:26:04.189032   80582 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:26:04.670565   80582 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0906 20:26:04.674971   80582 api_server.go:279] https://192.168.72.88:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:26:04.675010   80582 api_server.go:103] status: https://192.168.72.88:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:26:05.170500   80582 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0906 20:26:05.181685   80582 api_server.go:279] https://192.168.72.88:8443/healthz returned 200:
	ok
	I0906 20:26:05.190875   80582 api_server.go:141] control plane version: v1.31.0
	I0906 20:26:05.190906   80582 api_server.go:131] duration metric: took 4.020565071s to wait for apiserver health ...
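
	The polling above tolerates the 403 responses (the anonymous probe is rejected until the RBAC bootstrap roles exist) and the 500 responses (post-start hooks such as rbac/bootstrap-roles still running) until /healthz finally returns 200 "ok". A self-contained Go sketch of that wait loop, with the endpoint and the insecure TLS setting as assumptions for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 "ok"
	// or the deadline passes; 403 and 500 responses are treated as "not ready yet",
	// mirroring the behaviour visible in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The serving cert is signed by minikube's own CA, so this sketch skips
			// verification; a real client should trust that CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.88:8443/healthz", 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthy")
	}
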
	I0906 20:26:05.190916   80582 cni.go:84] Creating CNI manager for ""
	I0906 20:26:05.190924   80582 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:26:05.192964   80582 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:26:05.195071   80582 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:26:05.208978   80582 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:26:05.230823   80582 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:26:05.239543   80582 system_pods.go:59] 8 kube-system pods found
	I0906 20:26:05.239585   80582 system_pods.go:61] "coredns-6f6b679f8f-4xswg" [efce49c1-320c-420a-b4a9-ed48a7cf7b67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:26:05.239596   80582 system_pods.go:61] "etcd-newest-cni-113806" [dd83926b-e92f-447e-a901-197ba85c37f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:26:05.239607   80582 system_pods.go:61] "kube-apiserver-newest-cni-113806" [f02f9e04-377a-450b-b7bc-ce8941b57b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:26:05.239616   80582 system_pods.go:61] "kube-controller-manager-newest-cni-113806" [c33c08ee-95c2-40bf-8bfb-1ac3d27e2714] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:26:05.239625   80582 system_pods.go:61] "kube-proxy-j5kcv" [1b1dc947-a6b6-4944-9417-0f4ca76d08db] Running
	I0906 20:26:05.239634   80582 system_pods.go:61] "kube-scheduler-newest-cni-113806" [0fb8e867-6e9c-4800-8bc5-5488ff97c1d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:26:05.239643   80582 system_pods.go:61] "metrics-server-6867b74b74-mfxlh" [f3e9c452-1351-41dc-bbd5-affd793ed0f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:26:05.239650   80582 system_pods.go:61] "storage-provisioner" [73c77135-0408-4a06-a76e-fb7bd82b32c9] Running
	I0906 20:26:05.239659   80582 system_pods.go:74] duration metric: took 8.817919ms to wait for pod list to return data ...
	I0906 20:26:05.239677   80582 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:26:05.245187   80582 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:26:05.245210   80582 node_conditions.go:123] node cpu capacity is 2
	I0906 20:26:05.245222   80582 node_conditions.go:105] duration metric: took 5.540963ms to run NodePressure ...
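
	The NodePressure check reads node capacity from the API (17734596Ki of ephemeral storage and 2 CPUs here). A hedged client-go sketch that lists nodes and prints the same capacity fields, using the kubeconfig path that appears in this log; it is an illustration, not the test's own helper:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as written by this minikube profile (see the log above).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19576-6021/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			// The same values the log reports: ephemeral-storage and CPU capacity.
			fmt.Printf("%s ephemeral-storage=%s cpu=%s\n",
				n.Name,
				n.Status.Capacity.StorageEphemeral().String(),
				n.Status.Capacity.Cpu().String())
		}
	}
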
	I0906 20:26:05.245240   80582 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:26:05.514971   80582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:26:05.528108   80582 ops.go:34] apiserver oom_adj: -16
	I0906 20:26:05.528132   80582 kubeadm.go:597] duration metric: took 8.024988336s to restartPrimaryControlPlane
	I0906 20:26:05.528142   80582 kubeadm.go:394] duration metric: took 8.074537077s to StartCluster
	I0906 20:26:05.528163   80582 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:26:05.528240   80582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:26:05.529270   80582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:26:05.529511   80582 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:26:05.529640   80582 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:26:05.529735   80582 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-113806"
	I0906 20:26:05.529751   80582 addons.go:69] Setting metrics-server=true in profile "newest-cni-113806"
	I0906 20:26:05.529774   80582 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-113806"
	I0906 20:26:05.529785   80582 addons.go:234] Setting addon metrics-server=true in "newest-cni-113806"
	W0906 20:26:05.529795   80582 addons.go:243] addon storage-provisioner should already be in state true
	W0906 20:26:05.529797   80582 addons.go:243] addon metrics-server should already be in state true
	I0906 20:26:05.529796   80582 addons.go:69] Setting dashboard=true in profile "newest-cni-113806"
	I0906 20:26:05.529773   80582 addons.go:69] Setting default-storageclass=true in profile "newest-cni-113806"
	I0906 20:26:05.529833   80582 addons.go:234] Setting addon dashboard=true in "newest-cni-113806"
	I0906 20:26:05.529837   80582 host.go:66] Checking if "newest-cni-113806" exists ...
	I0906 20:26:05.529840   80582 host.go:66] Checking if "newest-cni-113806" exists ...
	W0906 20:26:05.529846   80582 addons.go:243] addon dashboard should already be in state true
	I0906 20:26:05.529843   80582 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-113806"
	I0906 20:26:05.529739   80582 config.go:182] Loaded profile config "newest-cni-113806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:26:05.529895   80582 host.go:66] Checking if "newest-cni-113806" exists ...
	I0906 20:26:05.530160   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:26:05.530195   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:26:05.530213   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:26:05.530235   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:26:05.530235   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:26:05.530244   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:26:05.530271   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:26:05.530319   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:26:05.531676   80582 out.go:177] * Verifying Kubernetes components...
	I0906 20:26:05.533195   80582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:26:05.546080   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44325
	I0906 20:26:05.546108   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45821
	I0906 20:26:05.546262   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40825
	I0906 20:26:05.546524   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:26:05.546524   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:26:05.546641   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:26:05.546985   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:26:05.547006   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:26:05.547170   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:26:05.547186   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:26:05.547314   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:26:05.547329   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:26:05.547378   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:26:05.547955   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:26:05.547999   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:26:05.548368   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:26:05.548403   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:26:05.548535   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40305
	I0906 20:26:05.548696   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetState
	I0906 20:26:05.548966   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:26:05.549094   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:26:05.549122   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:26:05.549464   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:26:05.549478   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:26:05.549840   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:26:05.550369   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:26:05.550395   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:26:05.551915   80582 addons.go:234] Setting addon default-storageclass=true in "newest-cni-113806"
	W0906 20:26:05.551937   80582 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:26:05.551968   80582 host.go:66] Checking if "newest-cni-113806" exists ...
	I0906 20:26:05.552343   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:26:05.552370   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:26:05.566428   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45921
	I0906 20:26:05.567182   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:26:05.567753   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:26:05.567775   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:26:05.568104   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:26:05.568187   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0906 20:26:05.568298   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0906 20:26:05.568626   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:26:05.568694   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:26:05.568781   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0906 20:26:05.568936   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetState
	I0906 20:26:05.569226   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:26:05.569246   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:26:05.569278   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:26:05.569387   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:26:05.569397   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:26:05.569592   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:26:05.569671   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:26:05.570109   80582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:26:05.570138   80582 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:26:05.570172   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:26:05.570187   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:26:05.570508   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetState
	I0906 20:26:05.570563   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:26:05.570788   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:26:05.570791   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetState
	I0906 20:26:05.572297   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:26:05.572417   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:26:05.572756   80582 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:26:05.573918   80582 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:26:05.573924   80582 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0906 20:26:05.573920   80582 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:26:05.574002   80582 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:26:05.574021   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:26:05.575017   80582 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:26:05.575033   80582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:26:05.575049   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:26:05.576438   80582 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0906 20:26:05.577057   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:26:05.577448   80582 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0906 20:26:05.577460   80582 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0906 20:26:05.577490   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:26:05.577500   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:26:05.577509   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:26:05.577831   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:26:05.578005   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:26:05.578181   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:26:05.578354   80582 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa Username:docker}
	I0906 20:26:05.578460   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:26:05.578933   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:26:05.578969   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:26:05.579233   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:26:05.579431   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:26:05.579706   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:26:05.579839   80582 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa Username:docker}
	I0906 20:26:05.580307   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:26:05.580682   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:26:05.580712   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:26:05.580947   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:26:05.581107   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:26:05.581219   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:26:05.581339   80582 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa Username:docker}
	I0906 20:26:05.590209   80582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0906 20:26:05.590591   80582 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:26:05.591186   80582 main.go:141] libmachine: Using API Version  1
	I0906 20:26:05.591216   80582 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:26:05.591554   80582 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:26:05.591766   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetState
	I0906 20:26:05.593398   80582 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:26:05.593639   80582 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:26:05.593656   80582 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:26:05.593675   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHHostname
	I0906 20:26:05.595924   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:26:05.596287   80582 main.go:141] libmachine: (newest-cni-113806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:27:d2", ip: ""} in network mk-newest-cni-113806: {Iface:virbr4 ExpiryTime:2024-09-06 21:25:43 +0000 UTC Type:0 Mac:52:54:00:3d:27:d2 Iaid: IPaddr:192.168.72.88 Prefix:24 Hostname:newest-cni-113806 Clientid:01:52:54:00:3d:27:d2}
	I0906 20:26:05.596321   80582 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined IP address 192.168.72.88 and MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:26:05.596390   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHPort
	I0906 20:26:05.596551   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHKeyPath
	I0906 20:26:05.596694   80582 main.go:141] libmachine: (newest-cni-113806) Calling .GetSSHUsername
	I0906 20:26:05.596839   80582 sshutil.go:53] new ssh client: &{IP:192.168.72.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa Username:docker}
	I0906 20:26:05.717798   80582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:26:05.735874   80582 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:26:05.735963   80582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:26:05.750419   80582 api_server.go:72] duration metric: took 220.865541ms to wait for apiserver process to appear ...
	I0906 20:26:05.750448   80582 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:26:05.750464   80582 api_server.go:253] Checking apiserver healthz at https://192.168.72.88:8443/healthz ...
	I0906 20:26:05.754380   80582 api_server.go:279] https://192.168.72.88:8443/healthz returned 200:
	ok
	I0906 20:26:05.755219   80582 api_server.go:141] control plane version: v1.31.0
	I0906 20:26:05.755243   80582 api_server.go:131] duration metric: took 4.788384ms to wait for apiserver health ...
	I0906 20:26:05.755253   80582 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:26:05.760419   80582 system_pods.go:59] 8 kube-system pods found
	I0906 20:26:05.760448   80582 system_pods.go:61] "coredns-6f6b679f8f-4xswg" [efce49c1-320c-420a-b4a9-ed48a7cf7b67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:26:05.760455   80582 system_pods.go:61] "etcd-newest-cni-113806" [dd83926b-e92f-447e-a901-197ba85c37f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:26:05.760465   80582 system_pods.go:61] "kube-apiserver-newest-cni-113806" [f02f9e04-377a-450b-b7bc-ce8941b57b71] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:26:05.760471   80582 system_pods.go:61] "kube-controller-manager-newest-cni-113806" [c33c08ee-95c2-40bf-8bfb-1ac3d27e2714] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:26:05.760476   80582 system_pods.go:61] "kube-proxy-j5kcv" [1b1dc947-a6b6-4944-9417-0f4ca76d08db] Running
	I0906 20:26:05.760481   80582 system_pods.go:61] "kube-scheduler-newest-cni-113806" [0fb8e867-6e9c-4800-8bc5-5488ff97c1d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:26:05.760496   80582 system_pods.go:61] "metrics-server-6867b74b74-mfxlh" [f3e9c452-1351-41dc-bbd5-affd793ed0f0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:26:05.760504   80582 system_pods.go:61] "storage-provisioner" [73c77135-0408-4a06-a76e-fb7bd82b32c9] Running
	I0906 20:26:05.760510   80582 system_pods.go:74] duration metric: took 5.250661ms to wait for pod list to return data ...
	I0906 20:26:05.760517   80582 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:26:05.763284   80582 default_sa.go:45] found service account: "default"
	I0906 20:26:05.763314   80582 default_sa.go:55] duration metric: took 2.786252ms for default service account to be created ...
	I0906 20:26:05.763328   80582 kubeadm.go:582] duration metric: took 233.778647ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 20:26:05.763348   80582 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:26:05.765646   80582 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:26:05.765668   80582 node_conditions.go:123] node cpu capacity is 2
	I0906 20:26:05.765682   80582 node_conditions.go:105] duration metric: took 2.327588ms to run NodePressure ...
	I0906 20:26:05.765695   80582 start.go:241] waiting for startup goroutines ...
	I0906 20:26:05.804006   80582 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:26:05.804030   80582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:26:05.814934   80582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:26:05.826834   80582 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0906 20:26:05.826859   80582 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0906 20:26:05.838276   80582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:26:05.844042   80582 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:26:05.844069   80582 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:26:05.895475   80582 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0906 20:26:05.895503   80582 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0906 20:26:05.939698   80582 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:26:05.939719   80582 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:26:06.030889   80582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:26:06.032076   80582 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0906 20:26:06.032092   80582 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0906 20:26:06.116792   80582 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0906 20:26:06.116812   80582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0906 20:26:06.249317   80582 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0906 20:26:06.249349   80582 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0906 20:26:06.315222   80582 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0906 20:26:06.315249   80582 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0906 20:26:06.375459   80582 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0906 20:26:06.375487   80582 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0906 20:26:06.400707   80582 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0906 20:26:06.400734   80582 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0906 20:26:06.439409   80582 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 20:26:06.439434   80582 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0906 20:26:06.512674   80582 main.go:141] libmachine: Making call to close driver server
	I0906 20:26:06.512699   80582 main.go:141] libmachine: (newest-cni-113806) Calling .Close
	I0906 20:26:06.513046   80582 main.go:141] libmachine: (newest-cni-113806) DBG | Closing plugin on server side
	I0906 20:26:06.513103   80582 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:26:06.513130   80582 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:26:06.513143   80582 main.go:141] libmachine: Making call to close driver server
	I0906 20:26:06.513156   80582 main.go:141] libmachine: (newest-cni-113806) Calling .Close
	I0906 20:26:06.513385   80582 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:26:06.513400   80582 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:26:06.519236   80582 main.go:141] libmachine: Making call to close driver server
	I0906 20:26:06.519252   80582 main.go:141] libmachine: (newest-cni-113806) Calling .Close
	I0906 20:26:06.519537   80582 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:26:06.519558   80582 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:26:06.544690   80582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0906 20:26:07.797129   80582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.958808183s)
	I0906 20:26:07.797186   80582 main.go:141] libmachine: Making call to close driver server
	I0906 20:26:07.797199   80582 main.go:141] libmachine: (newest-cni-113806) Calling .Close
	I0906 20:26:07.797501   80582 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:26:07.797560   80582 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:26:07.797571   80582 main.go:141] libmachine: Making call to close driver server
	I0906 20:26:07.797580   80582 main.go:141] libmachine: (newest-cni-113806) Calling .Close
	I0906 20:26:07.797532   80582 main.go:141] libmachine: (newest-cni-113806) DBG | Closing plugin on server side
	I0906 20:26:07.797836   80582 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:26:07.797867   80582 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:26:07.797868   80582 main.go:141] libmachine: (newest-cni-113806) DBG | Closing plugin on server side
	I0906 20:26:07.810726   80582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.779804852s)
	I0906 20:26:07.810784   80582 main.go:141] libmachine: Making call to close driver server
	I0906 20:26:07.810793   80582 main.go:141] libmachine: (newest-cni-113806) Calling .Close
	I0906 20:26:07.811066   80582 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:26:07.811087   80582 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:26:07.811095   80582 main.go:141] libmachine: (newest-cni-113806) DBG | Closing plugin on server side
	I0906 20:26:07.811099   80582 main.go:141] libmachine: Making call to close driver server
	I0906 20:26:07.811114   80582 main.go:141] libmachine: (newest-cni-113806) Calling .Close
	I0906 20:26:07.811326   80582 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:26:07.811338   80582 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:26:07.811367   80582 main.go:141] libmachine: (newest-cni-113806) DBG | Closing plugin on server side
	I0906 20:26:07.811374   80582 addons.go:475] Verifying addon metrics-server=true in "newest-cni-113806"
	I0906 20:26:08.416258   80582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.871520073s)
	I0906 20:26:08.416309   80582 main.go:141] libmachine: Making call to close driver server
	I0906 20:26:08.416323   80582 main.go:141] libmachine: (newest-cni-113806) Calling .Close
	I0906 20:26:08.416684   80582 main.go:141] libmachine: (newest-cni-113806) DBG | Closing plugin on server side
	I0906 20:26:08.416720   80582 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:26:08.416735   80582 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:26:08.416749   80582 main.go:141] libmachine: Making call to close driver server
	I0906 20:26:08.416757   80582 main.go:141] libmachine: (newest-cni-113806) Calling .Close
	I0906 20:26:08.417073   80582 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:26:08.417089   80582 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:26:08.418476   80582 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-113806 addons enable metrics-server
	
	I0906 20:26:08.420180   80582 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0906 20:26:08.421646   80582 addons.go:510] duration metric: took 2.892006252s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0906 20:26:08.421691   80582 start.go:246] waiting for cluster config update ...
	I0906 20:26:08.421706   80582 start.go:255] writing updated cluster config ...
	I0906 20:26:08.422021   80582 ssh_runner.go:195] Run: rm -f paused
	I0906 20:26:08.483624   80582 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:26:08.485306   80582 out.go:177] * Done! kubectl is now configured to use "newest-cni-113806" cluster and "default" namespace by default
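	For reference, the cluster/namespace configuration reported in the line above can be checked with standard kubectl commands. This is only a sketch, assuming kubectl is on the PATH of the machine that holds the kubeconfig used by this run:
	
	# print the context minikube reported as active; for this run it should be newest-cni-113806
	kubectl config current-context
	# print the namespace selected in that context (an empty result means "default")
	kubectl config view --minify --output 'jsonpath={..namespace}'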
	
	
	==> CRI-O <==
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.131894793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654381131868632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=308efe89-1ce3-4bcb-9395-dd00237d7acf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.132333331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51af8d37-efc1-4423-b363-584321b508b9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.132397530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51af8d37-efc1-4423-b363-584321b508b9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.132626989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6872d43d4bac54e3320b11898e953d5a5e21d20cf62e8a4248a34d02034b598d,PodSandboxId:61671a3f844efbab17ecdebfec8cd4a97449ed3a0dcb74521c17faf3d68ad00c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653376406262904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2a4afa2-1018-41f6-aecf-1b6300f520a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f5b8c5f632895d883fbb544fc5f36d6ebc43564a52477c07945a6287cbbb24,PodSandboxId:4c0e3cf407781899a0b3ee235bceab29ae10c540b4ff14e385534ff8049fa367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653376009255546,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-v4r9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84854d53-cb74-42c8-bb74-92536fcd300d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c8cdb6a45a07214d197ad93e75971b29c3ecc288cdccc9923ee083245a91f,PodSandboxId:3fa4ea69acc96abe485531b92ae9c4f859fa06c660a935f0b0d1300713c5685e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653375920428793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h9hv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: bf6ec352-3abf-4738-8f19-8a70916e98a9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa883ab3c2a4289d5db610b9f63e801a91613eef4ee4de48ce4d1da6064ac2d0,PodSandboxId:89c54fb230094f33fc5afb9e0a82e09f39e5a9590c27496d264a8e99d4e8d90a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1725653375260910062,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e0658b-592e-4d52-b431-f1227e742e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96410431602de98d0197451fda7c9d7dcd9567e6cc77b4b5d86becd313e505e,PodSandboxId:e0ed1ad3b6b6b30fe7aec4853cbfbb12acd9ed0f1f11ac8b9c47671fd776786a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653364321397039
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e042d563b1c2c161c2ba7b23067597,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0e2891a9f3d7070f1b2b40519fa57723fd596c1ac79375d0a965a516245625,PodSandboxId:570593df6df4be795a2deb2fbf510e950ca14f1af60062a868d44526ddc26040,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653364265291946,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0768cc3e6c91c8a2be732353a197244b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7100d8ec8ed109092f3ae87316812c6ae9274e92dca3559146ba2517ba1ec08,PodSandboxId:f7d73a66b27785f50e1a9d465aaf568d888b5d64527eb419e9bade59ceca6777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653364238419268,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fe32967e4110573cb6d097be950c99edb979c60718974323203291d8d6b03b,PodSandboxId:e2e3004b3fce6d199df1d0ea32a3e939728166d2d354875630dddb7e7ac30e92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653364201830039,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3333647b9fedcba3932ef7cb0607608,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3779deb7d72135701a2ac92cd1d924be7014e72efab533cfc9e13fc9cd9733,PodSandboxId:28592630c813981f553f072c644797adfab13f879ae03621750edd21de770422,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653075044979637,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51af8d37-efc1-4423-b363-584321b508b9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.169304637Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6c94e2c-4416-4633-b810-135addb1a936 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.169395348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6c94e2c-4416-4633-b810-135addb1a936 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.171532750Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=808e73d2-2643-4cc0-8d49-1fdeb1ff589f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.172287395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654381172258736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=808e73d2-2643-4cc0-8d49-1fdeb1ff589f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.173829896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7299abf-8f76-463d-9eef-c80870dfa045 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.173901283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7299abf-8f76-463d-9eef-c80870dfa045 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.174125945Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6872d43d4bac54e3320b11898e953d5a5e21d20cf62e8a4248a34d02034b598d,PodSandboxId:61671a3f844efbab17ecdebfec8cd4a97449ed3a0dcb74521c17faf3d68ad00c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653376406262904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2a4afa2-1018-41f6-aecf-1b6300f520a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f5b8c5f632895d883fbb544fc5f36d6ebc43564a52477c07945a6287cbbb24,PodSandboxId:4c0e3cf407781899a0b3ee235bceab29ae10c540b4ff14e385534ff8049fa367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653376009255546,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-v4r9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84854d53-cb74-42c8-bb74-92536fcd300d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c8cdb6a45a07214d197ad93e75971b29c3ecc288cdccc9923ee083245a91f,PodSandboxId:3fa4ea69acc96abe485531b92ae9c4f859fa06c660a935f0b0d1300713c5685e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653375920428793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h9hv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: bf6ec352-3abf-4738-8f19-8a70916e98a9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa883ab3c2a4289d5db610b9f63e801a91613eef4ee4de48ce4d1da6064ac2d0,PodSandboxId:89c54fb230094f33fc5afb9e0a82e09f39e5a9590c27496d264a8e99d4e8d90a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1725653375260910062,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e0658b-592e-4d52-b431-f1227e742e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96410431602de98d0197451fda7c9d7dcd9567e6cc77b4b5d86becd313e505e,PodSandboxId:e0ed1ad3b6b6b30fe7aec4853cbfbb12acd9ed0f1f11ac8b9c47671fd776786a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653364321397039
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e042d563b1c2c161c2ba7b23067597,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0e2891a9f3d7070f1b2b40519fa57723fd596c1ac79375d0a965a516245625,PodSandboxId:570593df6df4be795a2deb2fbf510e950ca14f1af60062a868d44526ddc26040,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653364265291946,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0768cc3e6c91c8a2be732353a197244b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7100d8ec8ed109092f3ae87316812c6ae9274e92dca3559146ba2517ba1ec08,PodSandboxId:f7d73a66b27785f50e1a9d465aaf568d888b5d64527eb419e9bade59ceca6777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653364238419268,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fe32967e4110573cb6d097be950c99edb979c60718974323203291d8d6b03b,PodSandboxId:e2e3004b3fce6d199df1d0ea32a3e939728166d2d354875630dddb7e7ac30e92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653364201830039,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3333647b9fedcba3932ef7cb0607608,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3779deb7d72135701a2ac92cd1d924be7014e72efab533cfc9e13fc9cd9733,PodSandboxId:28592630c813981f553f072c644797adfab13f879ae03621750edd21de770422,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653075044979637,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7299abf-8f76-463d-9eef-c80870dfa045 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.213562603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=020c6960-7952-489e-b913-43e46eec057b name=/runtime.v1.RuntimeService/Version
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.213632543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=020c6960-7952-489e-b913-43e46eec057b name=/runtime.v1.RuntimeService/Version
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.214458798Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0af3dc48-78ed-43c9-b4d9-7f0e629886dc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.215018164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654381214993225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0af3dc48-78ed-43c9-b4d9-7f0e629886dc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.215433014Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b2fbfd9-7ef5-423c-8d51-3475e80df177 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.215500420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b2fbfd9-7ef5-423c-8d51-3475e80df177 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.215693973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6872d43d4bac54e3320b11898e953d5a5e21d20cf62e8a4248a34d02034b598d,PodSandboxId:61671a3f844efbab17ecdebfec8cd4a97449ed3a0dcb74521c17faf3d68ad00c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653376406262904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2a4afa2-1018-41f6-aecf-1b6300f520a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f5b8c5f632895d883fbb544fc5f36d6ebc43564a52477c07945a6287cbbb24,PodSandboxId:4c0e3cf407781899a0b3ee235bceab29ae10c540b4ff14e385534ff8049fa367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653376009255546,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-v4r9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84854d53-cb74-42c8-bb74-92536fcd300d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c8cdb6a45a07214d197ad93e75971b29c3ecc288cdccc9923ee083245a91f,PodSandboxId:3fa4ea69acc96abe485531b92ae9c4f859fa06c660a935f0b0d1300713c5685e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653375920428793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h9hv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: bf6ec352-3abf-4738-8f19-8a70916e98a9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa883ab3c2a4289d5db610b9f63e801a91613eef4ee4de48ce4d1da6064ac2d0,PodSandboxId:89c54fb230094f33fc5afb9e0a82e09f39e5a9590c27496d264a8e99d4e8d90a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1725653375260910062,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e0658b-592e-4d52-b431-f1227e742e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96410431602de98d0197451fda7c9d7dcd9567e6cc77b4b5d86becd313e505e,PodSandboxId:e0ed1ad3b6b6b30fe7aec4853cbfbb12acd9ed0f1f11ac8b9c47671fd776786a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653364321397039
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e042d563b1c2c161c2ba7b23067597,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0e2891a9f3d7070f1b2b40519fa57723fd596c1ac79375d0a965a516245625,PodSandboxId:570593df6df4be795a2deb2fbf510e950ca14f1af60062a868d44526ddc26040,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653364265291946,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0768cc3e6c91c8a2be732353a197244b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7100d8ec8ed109092f3ae87316812c6ae9274e92dca3559146ba2517ba1ec08,PodSandboxId:f7d73a66b27785f50e1a9d465aaf568d888b5d64527eb419e9bade59ceca6777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653364238419268,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fe32967e4110573cb6d097be950c99edb979c60718974323203291d8d6b03b,PodSandboxId:e2e3004b3fce6d199df1d0ea32a3e939728166d2d354875630dddb7e7ac30e92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653364201830039,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3333647b9fedcba3932ef7cb0607608,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3779deb7d72135701a2ac92cd1d924be7014e72efab533cfc9e13fc9cd9733,PodSandboxId:28592630c813981f553f072c644797adfab13f879ae03621750edd21de770422,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653075044979637,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b2fbfd9-7ef5-423c-8d51-3475e80df177 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.249966176Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9051d934-10cd-4451-95c2-108d536e3653 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.250079421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9051d934-10cd-4451-95c2-108d536e3653 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.251240890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=657fedf8-d992-42e5-8bb6-749da0d5c8ab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.251661507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654381251638146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=657fedf8-d992-42e5-8bb6-749da0d5c8ab name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.252257577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1681189c-d4d9-4d8f-8fdc-58c4f842c8e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.252330363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1681189c-d4d9-4d8f-8fdc-58c4f842c8e8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:26:21 default-k8s-diff-port-653828 crio[705]: time="2024-09-06 20:26:21.252558219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6872d43d4bac54e3320b11898e953d5a5e21d20cf62e8a4248a34d02034b598d,PodSandboxId:61671a3f844efbab17ecdebfec8cd4a97449ed3a0dcb74521c17faf3d68ad00c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653376406262904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2a4afa2-1018-41f6-aecf-1b6300f520a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f5b8c5f632895d883fbb544fc5f36d6ebc43564a52477c07945a6287cbbb24,PodSandboxId:4c0e3cf407781899a0b3ee235bceab29ae10c540b4ff14e385534ff8049fa367,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653376009255546,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-v4r9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84854d53-cb74-42c8-bb74-92536fcd300d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de3c8cdb6a45a07214d197ad93e75971b29c3ecc288cdccc9923ee083245a91f,PodSandboxId:3fa4ea69acc96abe485531b92ae9c4f859fa06c660a935f0b0d1300713c5685e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653375920428793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h9hv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: bf6ec352-3abf-4738-8f19-8a70916e98a9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa883ab3c2a4289d5db610b9f63e801a91613eef4ee4de48ce4d1da6064ac2d0,PodSandboxId:89c54fb230094f33fc5afb9e0a82e09f39e5a9590c27496d264a8e99d4e8d90a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1725653375260910062,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e0658b-592e-4d52-b431-f1227e742e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96410431602de98d0197451fda7c9d7dcd9567e6cc77b4b5d86becd313e505e,PodSandboxId:e0ed1ad3b6b6b30fe7aec4853cbfbb12acd9ed0f1f11ac8b9c47671fd776786a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653364321397039
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e042d563b1c2c161c2ba7b23067597,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0e2891a9f3d7070f1b2b40519fa57723fd596c1ac79375d0a965a516245625,PodSandboxId:570593df6df4be795a2deb2fbf510e950ca14f1af60062a868d44526ddc26040,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653364265291946,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0768cc3e6c91c8a2be732353a197244b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7100d8ec8ed109092f3ae87316812c6ae9274e92dca3559146ba2517ba1ec08,PodSandboxId:f7d73a66b27785f50e1a9d465aaf568d888b5d64527eb419e9bade59ceca6777,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653364238419268,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fe32967e4110573cb6d097be950c99edb979c60718974323203291d8d6b03b,PodSandboxId:e2e3004b3fce6d199df1d0ea32a3e939728166d2d354875630dddb7e7ac30e92,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653364201830039,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3333647b9fedcba3932ef7cb0607608,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3779deb7d72135701a2ac92cd1d924be7014e72efab533cfc9e13fc9cd9733,PodSandboxId:28592630c813981f553f072c644797adfab13f879ae03621750edd21de770422,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653075044979637,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653828,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9043571ea96c1d42c42d5650a6306757,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1681189c-d4d9-4d8f-8fdc-58c4f842c8e8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6872d43d4bac5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   61671a3f844ef       storage-provisioner
	92f5b8c5f6328       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   4c0e3cf407781       coredns-6f6b679f8f-v4r9m
	de3c8cdb6a45a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   3fa4ea69acc96       coredns-6f6b679f8f-h9hv9
	fa883ab3c2a42       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   89c54fb230094       kube-proxy-7846f
	f96410431602d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   e0ed1ad3b6b6b       kube-scheduler-default-k8s-diff-port-653828
	ea0e2891a9f3d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   570593df6df4b       etcd-default-k8s-diff-port-653828
	a7100d8ec8ed1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   f7d73a66b2778       kube-apiserver-default-k8s-diff-port-653828
	c0fe32967e411       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   e2e3004b3fce6       kube-controller-manager-default-k8s-diff-port-653828
	2d3779deb7d72       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   28592630c8139       kube-apiserver-default-k8s-diff-port-653828
	
	
	==> coredns [92f5b8c5f632895d883fbb544fc5f36d6ebc43564a52477c07945a6287cbbb24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [de3c8cdb6a45a07214d197ad93e75971b29c3ecc288cdccc9923ee083245a91f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-653828
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-653828
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=default-k8s-diff-port-653828
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T20_09_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 20:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-653828
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 20:26:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 20:24:57 +0000   Fri, 06 Sep 2024 20:09:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 20:24:57 +0000   Fri, 06 Sep 2024 20:09:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 20:24:57 +0000   Fri, 06 Sep 2024 20:09:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 20:24:57 +0000   Fri, 06 Sep 2024 20:09:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.16
	  Hostname:    default-k8s-diff-port-653828
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ecf3f0842600481aa4cf97145c6b8004
	  System UUID:                ecf3f084-2600-481a-a4cf-97145c6b8004
	  Boot ID:                    7c4b00cb-e45a-48b2-8d6e-bc259b9684bb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-h9hv9                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-v4r9m                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-653828                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-653828             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-653828    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-7846f                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-653828             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-nwk7f                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-653828 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-653828 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-653828 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-653828 event: Registered Node default-k8s-diff-port-653828 in Controller
	
	
	==> dmesg <==
	[  +0.054159] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040179] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.907754] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.569305] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.631495] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.514601] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.061919] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063415] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.200764] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.117346] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.280152] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.240064] systemd-fstab-generator[787]: Ignoring "noauto" option for root device
	[  +1.979589] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +0.066666] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.533168] kauditd_printk_skb: 69 callbacks suppressed
	[  +9.293318] kauditd_printk_skb: 90 callbacks suppressed
	[Sep 6 20:09] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.616644] systemd-fstab-generator[2549]: Ignoring "noauto" option for root device
	[  +4.495490] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.891108] systemd-fstab-generator[2875]: Ignoring "noauto" option for root device
	[  +4.913870] systemd-fstab-generator[2985]: Ignoring "noauto" option for root device
	[  +0.100765] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.010924] kauditd_printk_skb: 87 callbacks suppressed
	
	
	==> etcd [ea0e2891a9f3d7070f1b2b40519fa57723fd596c1ac79375d0a965a516245625] <==
	{"level":"info","ts":"2024-09-06T20:09:24.888085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72247325455803ad became candidate at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:24.888109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72247325455803ad received MsgVoteResp from 72247325455803ad at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:24.888136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72247325455803ad became leader at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:24.888161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72247325455803ad elected leader 72247325455803ad at term 2"}
	{"level":"info","ts":"2024-09-06T20:09:24.893024Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:24.895156Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"72247325455803ad","local-member-attributes":"{Name:default-k8s-diff-port-653828 ClientURLs:[https://192.168.50.16:2379]}","request-path":"/0/members/72247325455803ad/attributes","cluster-id":"ca7d65c2cc2a573","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T20:09:24.897829Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ca7d65c2cc2a573","local-member-id":"72247325455803ad","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:24.897939Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:24.897979Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:09:24.898024Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:09:24.898320Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:09:24.899948Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:09:24.903673Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T20:09:24.906401Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:09:24.907481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.16:2379"}
	{"level":"info","ts":"2024-09-06T20:09:24.908022Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T20:09:24.908094Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T20:19:25.079585Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":687}
	{"level":"info","ts":"2024-09-06T20:19:25.093056Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":687,"took":"13.10572ms","hash":2179085669,"current-db-size-bytes":2203648,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2203648,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-09-06T20:19:25.093118Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2179085669,"revision":687,"compact-revision":-1}
	{"level":"info","ts":"2024-09-06T20:24:25.087227Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":930}
	{"level":"info","ts":"2024-09-06T20:24:25.090973Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":930,"took":"3.024315ms","hash":2578093203,"current-db-size-bytes":2203648,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1527808,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-09-06T20:24:25.091058Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2578093203,"revision":930,"compact-revision":687}
	{"level":"warn","ts":"2024-09-06T20:26:00.520509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.060973ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-06T20:26:00.520669Z","caller":"traceutil/trace.go:171","msg":"trace[740450369] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1255; }","duration":"144.266474ms","start":"2024-09-06T20:26:00.376374Z","end":"2024-09-06T20:26:00.520641Z","steps":["trace[740450369] 'range keys from in-memory index tree'  (duration: 144.04455ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:26:21 up 22 min,  0 users,  load average: 0.10, 0.07, 0.09
	Linux default-k8s-diff-port-653828 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2d3779deb7d72135701a2ac92cd1d924be7014e72efab533cfc9e13fc9cd9733] <==
	W0906 20:09:15.602579       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.605099       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.794620       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.808731       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.852556       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.890161       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:15.897072       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:16.054721       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:19.309688       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:19.373458       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:19.436881       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:19.645275       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:19.765259       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.016108       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.021995       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.130455       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.262138       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.271259       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.379197       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.401122       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.421176       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.422509       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.452508       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.663900       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:09:20.715709       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a7100d8ec8ed109092f3ae87316812c6ae9274e92dca3559146ba2517ba1ec08] <==
	I0906 20:22:27.861403       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:22:27.861460       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:24:26.860901       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:24:26.861252       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0906 20:24:27.862689       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:24:27.862799       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0906 20:24:27.862961       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:24:27.863050       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:24:27.864000       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:24:27.865132       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:25:27.864744       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:25:27.864875       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0906 20:25:27.865985       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:25:27.866070       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:25:27.866108       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:25:27.867342       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c0fe32967e4110573cb6d097be950c99edb979c60718974323203291d8d6b03b] <==
	E0906 20:21:03.834370       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:21:04.408613       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:21:33.847059       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:21:34.419641       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:22:03.854272       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:22:04.430627       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:22:33.861181       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:22:34.438723       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:23:03.867019       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:23:04.447516       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:23:33.876714       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:23:34.455256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:24:03.889047       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:24:04.463297       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:24:33.896063       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:24:34.471900       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:24:57.850824       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-653828"
	E0906 20:25:03.903418       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:25:04.480840       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:25:33.915868       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:25:34.497020       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:25:45.841995       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="274.29µs"
	I0906 20:25:57.843079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="134.416µs"
	E0906 20:26:03.923002       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:26:04.505535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fa883ab3c2a4289d5db610b9f63e801a91613eef4ee4de48ce4d1da6064ac2d0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 20:09:35.932980       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 20:09:36.061830       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.16"]
	E0906 20:09:36.062207       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 20:09:36.462736       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 20:09:36.463336       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 20:09:36.463373       1 server_linux.go:169] "Using iptables Proxier"
	I0906 20:09:36.473938       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 20:09:36.474220       1 server.go:483] "Version info" version="v1.31.0"
	I0906 20:09:36.474234       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:09:36.476661       1 config.go:197] "Starting service config controller"
	I0906 20:09:36.476677       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 20:09:36.476695       1 config.go:104] "Starting endpoint slice config controller"
	I0906 20:09:36.476698       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 20:09:36.476725       1 config.go:326] "Starting node config controller"
	I0906 20:09:36.476729       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 20:09:36.578210       1 shared_informer.go:320] Caches are synced for node config
	I0906 20:09:36.578817       1 shared_informer.go:320] Caches are synced for service config
	I0906 20:09:36.578831       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f96410431602de98d0197451fda7c9d7dcd9567e6cc77b4b5d86becd313e505e] <==
	W0906 20:09:26.891960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 20:09:26.892059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.710825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 20:09:27.710876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.841876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 20:09:27.841940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.842992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:27.843035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.918133       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 20:09:27.918205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.956695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:27.956747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.968052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 20:09:27.968110       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.971360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 20:09:27.971407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:27.979998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 20:09:27.980095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:28.018346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 20:09:28.018832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:28.167086       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 20:09:28.167146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:09:28.214024       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 20:09:28.214217       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0906 20:09:30.787543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 20:25:29 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:29.840021    2882 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 20:25:29 default-k8s-diff-port-653828 kubelet[2882]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 20:25:29 default-k8s-diff-port-653828 kubelet[2882]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 20:25:29 default-k8s-diff-port-653828 kubelet[2882]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 20:25:29 default-k8s-diff-port-653828 kubelet[2882]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 20:25:30 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:30.109400    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654330108837434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:30 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:30.109470    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654330108837434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:34 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:34.837065    2882 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 06 20:25:34 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:34.837256    2882 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 06 20:25:34 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:34.838973    2882 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st4h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-nwk7f_kube-system(6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Sep 06 20:25:34 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:34.840442    2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-nwk7f" podUID="6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe"
	Sep 06 20:25:40 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:40.112916    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654340112195625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:40 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:40.112963    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654340112195625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:45 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:45.824593    2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nwk7f" podUID="6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe"
	Sep 06 20:25:50 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:50.115199    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654350114717436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:50 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:50.115699    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654350114717436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:25:57 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:25:57.822843    2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nwk7f" podUID="6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe"
	Sep 06 20:26:00 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:26:00.117612    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654360117245539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:26:00 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:26:00.118057    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654360117245539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:26:08 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:26:08.824863    2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nwk7f" podUID="6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe"
	Sep 06 20:26:10 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:26:10.120837    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654370120270826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:26:10 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:26:10.121351    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654370120270826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:26:20 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:26:20.123018    2882 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654380122583181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:26:20 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:26:20.123058    2882 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654380122583181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:26:20 default-k8s-diff-port-653828 kubelet[2882]: E0906 20:26:20.823105    2882 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-nwk7f" podUID="6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe"
	
	
	==> storage-provisioner [6872d43d4bac54e3320b11898e953d5a5e21d20cf62e8a4248a34d02034b598d] <==
	I0906 20:09:36.567056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 20:09:36.586146       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 20:09:36.586298       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 20:09:36.615073       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 20:09:36.615552       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653828_3cd0d618-c03c-4aec-a5cc-4b988c4af110!
	I0906 20:09:36.626868       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"573f1391-b9fd-4ded-9a19-90e70383b09a", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-653828_3cd0d618-c03c-4aec-a5cc-4b988c4af110 became leader
	I0906 20:09:36.718215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653828_3cd0d618-c03c-4aec-a5cc-4b988c4af110!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-653828 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-nwk7f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-653828 describe pod metrics-server-6867b74b74-nwk7f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-653828 describe pod metrics-server-6867b74b74-nwk7f: exit status 1 (60.703941ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-nwk7f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-653828 describe pod metrics-server-6867b74b74-nwk7f: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (455.61s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (315.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-504385 -n no-preload-504385
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-06 20:24:53.091336935 +0000 UTC m=+6939.581090473
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-504385 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-504385 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.524µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-504385 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
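For reference, the check that timed out above can be reproduced by hand with kubectl (a minimal sketch, assuming the no-preload-504385 kubeconfig context from this run is still reachable):

	# list the dashboard pods the test was polling for (label k8s-app=kubernetes-dashboard)
	kubectl --context no-preload-504385 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# inspect the scraper deployment whose image was expected to contain registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-504385 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper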
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504385 -n no-preload-504385
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-504385 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-504385 logs -n 25: (1.222823036s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo find                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo crio                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-603826                                       | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-859361 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | disable-driver-mounts-859361                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:57 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-504385             | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-458066            | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653828  | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC | 06 Sep 24 19:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC |                     |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-504385                  | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-458066                 | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-843298        | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653828       | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-843298             | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:24 UTC | 06 Sep 24 20:24 UTC |
	| start   | -p newest-cni-113806 --memory=2200 --alsologtostderr   | newest-cni-113806            | jenkins | v1.34.0 | 06 Sep 24 20:24 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 20:24:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:24:32.425068   79846 out.go:345] Setting OutFile to fd 1 ...
	I0906 20:24:32.425179   79846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:24:32.425190   79846 out.go:358] Setting ErrFile to fd 2...
	I0906 20:24:32.425194   79846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:24:32.425435   79846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 20:24:32.426086   79846 out.go:352] Setting JSON to false
	I0906 20:24:32.427053   79846 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7621,"bootTime":1725646651,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 20:24:32.427112   79846 start.go:139] virtualization: kvm guest
	I0906 20:24:32.429457   79846 out.go:177] * [newest-cni-113806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 20:24:32.430704   79846 notify.go:220] Checking for updates...
	I0906 20:24:32.430710   79846 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 20:24:32.431932   79846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:24:32.433128   79846 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:24:32.434413   79846 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:24:32.435838   79846 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 20:24:32.437067   79846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:24:32.438922   79846 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:24:32.439065   79846 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:24:32.439205   79846 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:24:32.439346   79846 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 20:24:32.477506   79846 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 20:24:32.479007   79846 start.go:297] selected driver: kvm2
	I0906 20:24:32.479020   79846 start.go:901] validating driver "kvm2" against <nil>
	I0906 20:24:32.479034   79846 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:24:32.479773   79846 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:24:32.479879   79846 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 20:24:32.495825   79846 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 20:24:32.495875   79846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0906 20:24:32.495901   79846 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0906 20:24:32.496114   79846 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0906 20:24:32.496183   79846 cni.go:84] Creating CNI manager for ""
	I0906 20:24:32.496201   79846 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:24:32.496217   79846 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 20:24:32.496294   79846 start.go:340] cluster config:
	{Name:newest-cni-113806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-113806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:24:32.496410   79846 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:24:32.498592   79846 out.go:177] * Starting "newest-cni-113806" primary control-plane node in "newest-cni-113806" cluster
	I0906 20:24:32.499897   79846 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:24:32.499941   79846 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0906 20:24:32.499954   79846 cache.go:56] Caching tarball of preloaded images
	I0906 20:24:32.500054   79846 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 20:24:32.500065   79846 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0906 20:24:32.500168   79846 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/config.json ...
	I0906 20:24:32.500201   79846 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/newest-cni-113806/config.json: {Name:mk73b6d68e9e3941500b25daa559dbe3b78fc22e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:24:32.500366   79846 start.go:360] acquireMachinesLock for newest-cni-113806: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:24:32.500418   79846 start.go:364] duration metric: took 26.62µs to acquireMachinesLock for "newest-cni-113806"
	I0906 20:24:32.500444   79846 start.go:93] Provisioning new machine with config: &{Name:newest-cni-113806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-113806
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:24:32.500520   79846 start.go:125] createHost starting for "" (driver="kvm2")
	I0906 20:24:32.502771   79846 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0906 20:24:32.502947   79846 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:24:32.502999   79846 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:24:32.517937   79846 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I0906 20:24:32.518372   79846 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:24:32.518898   79846 main.go:141] libmachine: Using API Version  1
	I0906 20:24:32.518924   79846 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:24:32.519263   79846 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:24:32.519512   79846 main.go:141] libmachine: (newest-cni-113806) Calling .GetMachineName
	I0906 20:24:32.519666   79846 main.go:141] libmachine: (newest-cni-113806) Calling .DriverName
	I0906 20:24:32.519821   79846 start.go:159] libmachine.API.Create for "newest-cni-113806" (driver="kvm2")
	I0906 20:24:32.519878   79846 client.go:168] LocalClient.Create starting
	I0906 20:24:32.519924   79846 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem
	I0906 20:24:32.519968   79846 main.go:141] libmachine: Decoding PEM data...
	I0906 20:24:32.519984   79846 main.go:141] libmachine: Parsing certificate...
	I0906 20:24:32.520039   79846 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem
	I0906 20:24:32.520060   79846 main.go:141] libmachine: Decoding PEM data...
	I0906 20:24:32.520074   79846 main.go:141] libmachine: Parsing certificate...
	I0906 20:24:32.520089   79846 main.go:141] libmachine: Running pre-create checks...
	I0906 20:24:32.520102   79846 main.go:141] libmachine: (newest-cni-113806) Calling .PreCreateCheck
	I0906 20:24:32.520502   79846 main.go:141] libmachine: (newest-cni-113806) Calling .GetConfigRaw
	I0906 20:24:32.520956   79846 main.go:141] libmachine: Creating machine...
	I0906 20:24:32.520974   79846 main.go:141] libmachine: (newest-cni-113806) Calling .Create
	I0906 20:24:32.521116   79846 main.go:141] libmachine: (newest-cni-113806) Creating KVM machine...
	I0906 20:24:32.522484   79846 main.go:141] libmachine: (newest-cni-113806) DBG | found existing default KVM network
	I0906 20:24:32.523943   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:32.523790   79870 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4e:64:47} reservation:<nil>}
	I0906 20:24:32.525061   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:32.524988   79870 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9b:96:2c} reservation:<nil>}
	I0906 20:24:32.525892   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:32.525823   79870 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:cb:35:15} reservation:<nil>}
	I0906 20:24:32.526904   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:32.526834   79870 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a7760}
	I0906 20:24:32.526921   79846 main.go:141] libmachine: (newest-cni-113806) DBG | created network xml: 
	I0906 20:24:32.526933   79846 main.go:141] libmachine: (newest-cni-113806) DBG | <network>
	I0906 20:24:32.526946   79846 main.go:141] libmachine: (newest-cni-113806) DBG |   <name>mk-newest-cni-113806</name>
	I0906 20:24:32.526959   79846 main.go:141] libmachine: (newest-cni-113806) DBG |   <dns enable='no'/>
	I0906 20:24:32.526968   79846 main.go:141] libmachine: (newest-cni-113806) DBG |   
	I0906 20:24:32.526981   79846 main.go:141] libmachine: (newest-cni-113806) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0906 20:24:32.526992   79846 main.go:141] libmachine: (newest-cni-113806) DBG |     <dhcp>
	I0906 20:24:32.527001   79846 main.go:141] libmachine: (newest-cni-113806) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0906 20:24:32.527014   79846 main.go:141] libmachine: (newest-cni-113806) DBG |     </dhcp>
	I0906 20:24:32.527045   79846 main.go:141] libmachine: (newest-cni-113806) DBG |   </ip>
	I0906 20:24:32.527066   79846 main.go:141] libmachine: (newest-cni-113806) DBG |   
	I0906 20:24:32.527088   79846 main.go:141] libmachine: (newest-cni-113806) DBG | </network>
	I0906 20:24:32.527105   79846 main.go:141] libmachine: (newest-cni-113806) DBG | 
	I0906 20:24:32.532372   79846 main.go:141] libmachine: (newest-cni-113806) DBG | trying to create private KVM network mk-newest-cni-113806 192.168.72.0/24...
	I0906 20:24:32.608489   79846 main.go:141] libmachine: (newest-cni-113806) DBG | private KVM network mk-newest-cni-113806 192.168.72.0/24 created
	I0906 20:24:32.608529   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:32.608424   79870 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:24:32.608542   79846 main.go:141] libmachine: (newest-cni-113806) Setting up store path in /home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806 ...
	I0906 20:24:32.608567   79846 main.go:141] libmachine: (newest-cni-113806) Building disk image from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 20:24:32.608589   79846 main.go:141] libmachine: (newest-cni-113806) Downloading /home/jenkins/minikube-integration/19576-6021/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso...
	I0906 20:24:32.846204   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:32.846070   79870 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/id_rsa...
	I0906 20:24:33.408136   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:33.408021   79870 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/newest-cni-113806.rawdisk...
	I0906 20:24:33.408164   79846 main.go:141] libmachine: (newest-cni-113806) DBG | Writing magic tar header
	I0906 20:24:33.408178   79846 main.go:141] libmachine: (newest-cni-113806) DBG | Writing SSH key tar header
	I0906 20:24:33.408186   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:33.408123   79870 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806 ...
	I0906 20:24:33.408202   79846 main.go:141] libmachine: (newest-cni-113806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806
	I0906 20:24:33.408252   79846 main.go:141] libmachine: (newest-cni-113806) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806 (perms=drwx------)
	I0906 20:24:33.408288   79846 main.go:141] libmachine: (newest-cni-113806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube/machines
	I0906 20:24:33.408306   79846 main.go:141] libmachine: (newest-cni-113806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:24:33.408313   79846 main.go:141] libmachine: (newest-cni-113806) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube/machines (perms=drwxr-xr-x)
	I0906 20:24:33.408326   79846 main.go:141] libmachine: (newest-cni-113806) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021/.minikube (perms=drwxr-xr-x)
	I0906 20:24:33.408335   79846 main.go:141] libmachine: (newest-cni-113806) Setting executable bit set on /home/jenkins/minikube-integration/19576-6021 (perms=drwxrwxr-x)
	I0906 20:24:33.408346   79846 main.go:141] libmachine: (newest-cni-113806) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0906 20:24:33.408355   79846 main.go:141] libmachine: (newest-cni-113806) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0906 20:24:33.408363   79846 main.go:141] libmachine: (newest-cni-113806) Creating domain...
	I0906 20:24:33.408375   79846 main.go:141] libmachine: (newest-cni-113806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19576-6021
	I0906 20:24:33.408385   79846 main.go:141] libmachine: (newest-cni-113806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0906 20:24:33.408430   79846 main.go:141] libmachine: (newest-cni-113806) DBG | Checking permissions on dir: /home/jenkins
	I0906 20:24:33.408471   79846 main.go:141] libmachine: (newest-cni-113806) DBG | Checking permissions on dir: /home
	I0906 20:24:33.408487   79846 main.go:141] libmachine: (newest-cni-113806) DBG | Skipping /home - not owner
	I0906 20:24:33.409733   79846 main.go:141] libmachine: (newest-cni-113806) define libvirt domain using xml: 
	I0906 20:24:33.409758   79846 main.go:141] libmachine: (newest-cni-113806) <domain type='kvm'>
	I0906 20:24:33.409769   79846 main.go:141] libmachine: (newest-cni-113806)   <name>newest-cni-113806</name>
	I0906 20:24:33.409784   79846 main.go:141] libmachine: (newest-cni-113806)   <memory unit='MiB'>2200</memory>
	I0906 20:24:33.409797   79846 main.go:141] libmachine: (newest-cni-113806)   <vcpu>2</vcpu>
	I0906 20:24:33.409807   79846 main.go:141] libmachine: (newest-cni-113806)   <features>
	I0906 20:24:33.409822   79846 main.go:141] libmachine: (newest-cni-113806)     <acpi/>
	I0906 20:24:33.409838   79846 main.go:141] libmachine: (newest-cni-113806)     <apic/>
	I0906 20:24:33.409873   79846 main.go:141] libmachine: (newest-cni-113806)     <pae/>
	I0906 20:24:33.409897   79846 main.go:141] libmachine: (newest-cni-113806)     
	I0906 20:24:33.409916   79846 main.go:141] libmachine: (newest-cni-113806)   </features>
	I0906 20:24:33.409933   79846 main.go:141] libmachine: (newest-cni-113806)   <cpu mode='host-passthrough'>
	I0906 20:24:33.409944   79846 main.go:141] libmachine: (newest-cni-113806)   
	I0906 20:24:33.409955   79846 main.go:141] libmachine: (newest-cni-113806)   </cpu>
	I0906 20:24:33.409964   79846 main.go:141] libmachine: (newest-cni-113806)   <os>
	I0906 20:24:33.409974   79846 main.go:141] libmachine: (newest-cni-113806)     <type>hvm</type>
	I0906 20:24:33.409994   79846 main.go:141] libmachine: (newest-cni-113806)     <boot dev='cdrom'/>
	I0906 20:24:33.410008   79846 main.go:141] libmachine: (newest-cni-113806)     <boot dev='hd'/>
	I0906 20:24:33.410034   79846 main.go:141] libmachine: (newest-cni-113806)     <bootmenu enable='no'/>
	I0906 20:24:33.410044   79846 main.go:141] libmachine: (newest-cni-113806)   </os>
	I0906 20:24:33.410053   79846 main.go:141] libmachine: (newest-cni-113806)   <devices>
	I0906 20:24:33.410063   79846 main.go:141] libmachine: (newest-cni-113806)     <disk type='file' device='cdrom'>
	I0906 20:24:33.410080   79846 main.go:141] libmachine: (newest-cni-113806)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/boot2docker.iso'/>
	I0906 20:24:33.410098   79846 main.go:141] libmachine: (newest-cni-113806)       <target dev='hdc' bus='scsi'/>
	I0906 20:24:33.410111   79846 main.go:141] libmachine: (newest-cni-113806)       <readonly/>
	I0906 20:24:33.410122   79846 main.go:141] libmachine: (newest-cni-113806)     </disk>
	I0906 20:24:33.410133   79846 main.go:141] libmachine: (newest-cni-113806)     <disk type='file' device='disk'>
	I0906 20:24:33.410147   79846 main.go:141] libmachine: (newest-cni-113806)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0906 20:24:33.410162   79846 main.go:141] libmachine: (newest-cni-113806)       <source file='/home/jenkins/minikube-integration/19576-6021/.minikube/machines/newest-cni-113806/newest-cni-113806.rawdisk'/>
	I0906 20:24:33.410175   79846 main.go:141] libmachine: (newest-cni-113806)       <target dev='hda' bus='virtio'/>
	I0906 20:24:33.410189   79846 main.go:141] libmachine: (newest-cni-113806)     </disk>
	I0906 20:24:33.410201   79846 main.go:141] libmachine: (newest-cni-113806)     <interface type='network'>
	I0906 20:24:33.410212   79846 main.go:141] libmachine: (newest-cni-113806)       <source network='mk-newest-cni-113806'/>
	I0906 20:24:33.410223   79846 main.go:141] libmachine: (newest-cni-113806)       <model type='virtio'/>
	I0906 20:24:33.410232   79846 main.go:141] libmachine: (newest-cni-113806)     </interface>
	I0906 20:24:33.410254   79846 main.go:141] libmachine: (newest-cni-113806)     <interface type='network'>
	I0906 20:24:33.410273   79846 main.go:141] libmachine: (newest-cni-113806)       <source network='default'/>
	I0906 20:24:33.410286   79846 main.go:141] libmachine: (newest-cni-113806)       <model type='virtio'/>
	I0906 20:24:33.410296   79846 main.go:141] libmachine: (newest-cni-113806)     </interface>
	I0906 20:24:33.410311   79846 main.go:141] libmachine: (newest-cni-113806)     <serial type='pty'>
	I0906 20:24:33.410318   79846 main.go:141] libmachine: (newest-cni-113806)       <target port='0'/>
	I0906 20:24:33.410324   79846 main.go:141] libmachine: (newest-cni-113806)     </serial>
	I0906 20:24:33.410331   79846 main.go:141] libmachine: (newest-cni-113806)     <console type='pty'>
	I0906 20:24:33.410340   79846 main.go:141] libmachine: (newest-cni-113806)       <target type='serial' port='0'/>
	I0906 20:24:33.410346   79846 main.go:141] libmachine: (newest-cni-113806)     </console>
	I0906 20:24:33.410355   79846 main.go:141] libmachine: (newest-cni-113806)     <rng model='virtio'>
	I0906 20:24:33.410364   79846 main.go:141] libmachine: (newest-cni-113806)       <backend model='random'>/dev/random</backend>
	I0906 20:24:33.410373   79846 main.go:141] libmachine: (newest-cni-113806)     </rng>
	I0906 20:24:33.410378   79846 main.go:141] libmachine: (newest-cni-113806)     
	I0906 20:24:33.410383   79846 main.go:141] libmachine: (newest-cni-113806)     
	I0906 20:24:33.410388   79846 main.go:141] libmachine: (newest-cni-113806)   </devices>
	I0906 20:24:33.410392   79846 main.go:141] libmachine: (newest-cni-113806) </domain>
	I0906 20:24:33.410396   79846 main.go:141] libmachine: (newest-cni-113806) 
	I0906 20:24:33.414052   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:7e:67:f5 in network default
	I0906 20:24:33.414626   79846 main.go:141] libmachine: (newest-cni-113806) Ensuring networks are active...
	I0906 20:24:33.414650   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:33.415284   79846 main.go:141] libmachine: (newest-cni-113806) Ensuring network default is active
	I0906 20:24:33.415563   79846 main.go:141] libmachine: (newest-cni-113806) Ensuring network mk-newest-cni-113806 is active
	I0906 20:24:33.415992   79846 main.go:141] libmachine: (newest-cni-113806) Getting domain xml...
	I0906 20:24:33.416617   79846 main.go:141] libmachine: (newest-cni-113806) Creating domain...
	I0906 20:24:34.667761   79846 main.go:141] libmachine: (newest-cni-113806) Waiting to get IP...
	I0906 20:24:34.668549   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:34.669054   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:34.669078   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:34.669023   79870 retry.go:31] will retry after 286.687388ms: waiting for machine to come up
	I0906 20:24:34.957560   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:34.958177   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:34.958208   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:34.958131   79870 retry.go:31] will retry after 367.318166ms: waiting for machine to come up
	I0906 20:24:35.326750   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:35.327262   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:35.327291   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:35.327229   79870 retry.go:31] will retry after 419.135365ms: waiting for machine to come up
	I0906 20:24:35.747821   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:35.748315   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:35.748344   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:35.748261   79870 retry.go:31] will retry after 438.668162ms: waiting for machine to come up
	I0906 20:24:36.189136   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:36.189603   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:36.189633   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:36.189554   79870 retry.go:31] will retry after 682.972275ms: waiting for machine to come up
	I0906 20:24:36.874424   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:36.874808   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:36.874843   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:36.874767   79870 retry.go:31] will retry after 837.856719ms: waiting for machine to come up
	I0906 20:24:37.713774   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:37.714232   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:37.714260   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:37.714190   79870 retry.go:31] will retry after 727.217369ms: waiting for machine to come up
	I0906 20:24:38.443033   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:38.443446   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:38.443476   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:38.443398   79870 retry.go:31] will retry after 1.189719488s: waiting for machine to come up
	I0906 20:24:39.634674   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:39.635165   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:39.635191   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:39.635112   79870 retry.go:31] will retry after 1.803244661s: waiting for machine to come up
	I0906 20:24:41.439507   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:41.440014   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:41.440037   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:41.439974   79870 retry.go:31] will retry after 1.652581192s: waiting for machine to come up
	I0906 20:24:43.093883   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:43.094325   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:43.094355   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:43.094283   79870 retry.go:31] will retry after 2.550887468s: waiting for machine to come up
	I0906 20:24:45.647887   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:45.648205   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:45.648221   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:45.648185   79870 retry.go:31] will retry after 2.700729405s: waiting for machine to come up
	I0906 20:24:48.350984   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:48.351316   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:48.351335   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:48.351292   79870 retry.go:31] will retry after 3.385076724s: waiting for machine to come up
	I0906 20:24:51.737923   79846 main.go:141] libmachine: (newest-cni-113806) DBG | domain newest-cni-113806 has defined MAC address 52:54:00:3d:27:d2 in network mk-newest-cni-113806
	I0906 20:24:51.738348   79846 main.go:141] libmachine: (newest-cni-113806) DBG | unable to find current IP address of domain newest-cni-113806 in network mk-newest-cni-113806
	I0906 20:24:51.738368   79846 main.go:141] libmachine: (newest-cni-113806) DBG | I0906 20:24:51.738311   79870 retry.go:31] will retry after 3.668185299s: waiting for machine to come up
	
	
	==> CRI-O <==
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.671892783Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2d4fdae5623209a4a9c81bbbadb72d27ef92a9cec8ad8d8baac410a02603ebb7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725653426151250098,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-06T20:10:25.833246012Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c7c9546b6b4f4c399289fa651e3434384010c76fabcaab38485183f5eeccfda4,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-56mkl,Uid:73747864-24bf-42d0-956b-6047a52ed887,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725653426120374128,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-56mkl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73747864-24bf-42d0-956b-6047a52ed887
,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T20:10:25.813382194Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dec01f5a6cb5ff170f39da6190d9eb3c05e7a4534d47e936bd93819b71fcf7ba,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-ffnb7,Uid:59184ee8-fe9e-479d-b298-0ee9818e4a00,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725653425995971574,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-ffnb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59184ee8-fe9e-479d-b298-0ee9818e4a00,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T20:10:24.779080826Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a49d2a2d2ae22949d526811d3867714c9769407d2d8bb11ef4e221a26e0aaa4c,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-lwxzl,Uid:e2df0b29-0770-447f-
8051-fce39e9acff0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725653425888997109,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-lwxzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2df0b29-0770-447f-8051-fce39e9acff0,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T20:10:24.680711633Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ac14735d0ad7eab6615b35ba479b621f02f5cb980cef11dcbeb516b0ec1b021,Metadata:&PodSandboxMetadata{Name:kube-proxy-48s2x,Uid:dd175211-d965-4b1a-a37a-d1e6df47f09b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725653424833522507,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-48s2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd175211-d965-4b1a-a37a-d1e6df47f09b,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-06T20:10:24.526033677Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:84e4b3e7daacf7879bdecebb481070297b782baa94efc4fac759568a69bd114f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-504385,Uid:0bfa0d921a0ce0a55af27a1696709e36,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725653413885184048,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bfa0d921a0ce0a55af27a1696709e36,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0bfa0d921a0ce0a55af27a1696709e36,kubernetes.io/config.seen: 2024-09-06T20:10:13.425071226Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:05b21dab03397f23365b83428df6728dfdf4f3f6d2885a76045d9b67fdebff0b,Metadata:&PodSandboxMeta
data{Name:kube-apiserver-no-preload-504385,Uid:9bad3c201ac54ebb96e54bc9ed809900,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1725653413883062088,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.184:8443,kubernetes.io/config.hash: 9bad3c201ac54ebb96e54bc9ed809900,kubernetes.io/config.seen: 2024-09-06T20:10:13.425070147Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:913147acc93fff54b8207751a9fd92d032d6f000344d6d7c0043cfadc44a49cf,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-504385,Uid:123de27c3b8551a9387ecadceaf69150,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725653413859436130,Labels:map[string]string{component: etcd,io.kubernetes
.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123de27c3b8551a9387ecadceaf69150,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.184:2379,kubernetes.io/config.hash: 123de27c3b8551a9387ecadceaf69150,kubernetes.io/config.seen: 2024-09-06T20:10:13.425066596Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:65431f984a19ec13097627219fb9430474eb25887961ce21ab24c124cefd3a7c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-504385,Uid:a3b99a5d59e524a05b9e2d8501bf6d11,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1725653413852282177,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b99a5d59e524a05b9e2d8501bf6d11,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: a3b99a5d59e524a05b9e2d8501bf6d11,kubernetes.io/config.seen: 2024-09-06T20:10:13.425072122Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=52a0557e-9c94-47b2-9b90-b57f0db82498 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.672457755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5105020-5562-4800-b7c5-c2ff56dab274 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.672510186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5105020-5562-4800-b7c5-c2ff56dab274 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.672714860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15274b39b451abccf3886a126ddfe07922152942a03e462584d3eee39a2c0f3d,PodSandboxId:2d4fdae5623209a4a9c81bbbadb72d27ef92a9cec8ad8d8baac410a02603ebb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653426408777595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f4c6d93e53cbbc0c351c91fb7c74a7de9dd10899a589c1bee05f99af6db6a,PodSandboxId:dec01f5a6cb5ff170f39da6190d9eb3c05e7a4534d47e936bd93819b71fcf7ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426457280757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ffnb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59184ee8-fe9e-479d-b298-0ee9818e4a00,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f41a4d40a24ea78c12c4a62e7ea6f4e09d4ff71e4513cd7d0a8d8dd66996ce8,PodSandboxId:a49d2a2d2ae22949d526811d3867714c9769407d2d8bb11ef4e221a26e0aaa4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426293020635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwxzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2
df0b29-0770-447f-8051-fce39e9acff0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b197d3d52688957a63cce3ecf015b9d869dd995d73cfe8686595ce6ef51df,PodSandboxId:9ac14735d0ad7eab6615b35ba479b621f02f5cb980cef11dcbeb516b0ec1b021,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725653424969666674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48s2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd175211-d965-4b1a-a37a-d1e6df47f09b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badd0b7d7706b2371c40a23bd57b90529c240ed5444d7dc95b4af308c113465f,PodSandboxId:913147acc93fff54b8207751a9fd92d032d6f000344d6d7c0043cfadc44a49cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653414151172082,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123de27c3b8551a9387ecadceaf69150,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c171d8f525af6029e27b4f097742a4573016670bf522f208c75d45ccf03ceb4b,PodSandboxId:05b21dab03397f23365b83428df6728dfdf4f3f6d2885a76045d9b67fdebff0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653414103380518,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57091fe8c0c738bd8713e9b00e9d6f28c32efebf7704702322ad06f77e17f32b,PodSandboxId:84e4b3e7daacf7879bdecebb481070297b782baa94efc4fac759568a69bd114f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653414060527569,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bfa0d921a0ce0a55af27a1696709e36,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f08497dae4ccb599defc3157bd84e508d31375ba472a2bb28419103113ff131,PodSandboxId:65431f984a19ec13097627219fb9430474eb25887961ce21ab24c124cefd3a7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653413999842100,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b99a5d59e524a05b9e2d8501bf6d11,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5105020-5562-4800-b7c5-c2ff56dab274 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.704266867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ece6c37-ffc4-49fe-99df-406c8f7f8a59 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.704340389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ece6c37-ffc4-49fe-99df-406c8f7f8a59 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.705740879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=079a804f-aa65-499f-b0f1-ae6a96658547 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.706215797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654293706178406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=079a804f-aa65-499f-b0f1-ae6a96658547 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.706856734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09ff11e1-c904-4751-bd30-1652c84cdf85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.706909141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09ff11e1-c904-4751-bd30-1652c84cdf85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.707102154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15274b39b451abccf3886a126ddfe07922152942a03e462584d3eee39a2c0f3d,PodSandboxId:2d4fdae5623209a4a9c81bbbadb72d27ef92a9cec8ad8d8baac410a02603ebb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653426408777595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f4c6d93e53cbbc0c351c91fb7c74a7de9dd10899a589c1bee05f99af6db6a,PodSandboxId:dec01f5a6cb5ff170f39da6190d9eb3c05e7a4534d47e936bd93819b71fcf7ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426457280757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ffnb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59184ee8-fe9e-479d-b298-0ee9818e4a00,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f41a4d40a24ea78c12c4a62e7ea6f4e09d4ff71e4513cd7d0a8d8dd66996ce8,PodSandboxId:a49d2a2d2ae22949d526811d3867714c9769407d2d8bb11ef4e221a26e0aaa4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426293020635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwxzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2
df0b29-0770-447f-8051-fce39e9acff0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b197d3d52688957a63cce3ecf015b9d869dd995d73cfe8686595ce6ef51df,PodSandboxId:9ac14735d0ad7eab6615b35ba479b621f02f5cb980cef11dcbeb516b0ec1b021,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725653424969666674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48s2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd175211-d965-4b1a-a37a-d1e6df47f09b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badd0b7d7706b2371c40a23bd57b90529c240ed5444d7dc95b4af308c113465f,PodSandboxId:913147acc93fff54b8207751a9fd92d032d6f000344d6d7c0043cfadc44a49cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653414151172082,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123de27c3b8551a9387ecadceaf69150,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c171d8f525af6029e27b4f097742a4573016670bf522f208c75d45ccf03ceb4b,PodSandboxId:05b21dab03397f23365b83428df6728dfdf4f3f6d2885a76045d9b67fdebff0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653414103380518,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57091fe8c0c738bd8713e9b00e9d6f28c32efebf7704702322ad06f77e17f32b,PodSandboxId:84e4b3e7daacf7879bdecebb481070297b782baa94efc4fac759568a69bd114f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653414060527569,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bfa0d921a0ce0a55af27a1696709e36,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f08497dae4ccb599defc3157bd84e508d31375ba472a2bb28419103113ff131,PodSandboxId:65431f984a19ec13097627219fb9430474eb25887961ce21ab24c124cefd3a7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653413999842100,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b99a5d59e524a05b9e2d8501bf6d11,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5378fe314f7a6ccad157d8c7e69480e9a035d341e61e570c71a186f8d71d64,PodSandboxId:3860b04bee19bf7b767c5e11a57b09a688be56266c03bdd875b4842531155254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653127564331448,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09ff11e1-c904-4751-bd30-1652c84cdf85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.745347217Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1ee9212-1d01-4f08-a5f1-a9832c0cd68b name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.745440586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1ee9212-1d01-4f08-a5f1-a9832c0cd68b name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.746486936Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62d1d395-6044-4f65-b7d6-57f12f9f2032 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.746986045Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654293746964856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62d1d395-6044-4f65-b7d6-57f12f9f2032 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.747577184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c950467d-baa1-4baa-b5db-61ca6a85730b name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.747712205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c950467d-baa1-4baa-b5db-61ca6a85730b name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.747960574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15274b39b451abccf3886a126ddfe07922152942a03e462584d3eee39a2c0f3d,PodSandboxId:2d4fdae5623209a4a9c81bbbadb72d27ef92a9cec8ad8d8baac410a02603ebb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653426408777595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f4c6d93e53cbbc0c351c91fb7c74a7de9dd10899a589c1bee05f99af6db6a,PodSandboxId:dec01f5a6cb5ff170f39da6190d9eb3c05e7a4534d47e936bd93819b71fcf7ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426457280757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ffnb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59184ee8-fe9e-479d-b298-0ee9818e4a00,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f41a4d40a24ea78c12c4a62e7ea6f4e09d4ff71e4513cd7d0a8d8dd66996ce8,PodSandboxId:a49d2a2d2ae22949d526811d3867714c9769407d2d8bb11ef4e221a26e0aaa4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426293020635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwxzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2
df0b29-0770-447f-8051-fce39e9acff0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b197d3d52688957a63cce3ecf015b9d869dd995d73cfe8686595ce6ef51df,PodSandboxId:9ac14735d0ad7eab6615b35ba479b621f02f5cb980cef11dcbeb516b0ec1b021,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725653424969666674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48s2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd175211-d965-4b1a-a37a-d1e6df47f09b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badd0b7d7706b2371c40a23bd57b90529c240ed5444d7dc95b4af308c113465f,PodSandboxId:913147acc93fff54b8207751a9fd92d032d6f000344d6d7c0043cfadc44a49cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653414151172082,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123de27c3b8551a9387ecadceaf69150,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c171d8f525af6029e27b4f097742a4573016670bf522f208c75d45ccf03ceb4b,PodSandboxId:05b21dab03397f23365b83428df6728dfdf4f3f6d2885a76045d9b67fdebff0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653414103380518,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57091fe8c0c738bd8713e9b00e9d6f28c32efebf7704702322ad06f77e17f32b,PodSandboxId:84e4b3e7daacf7879bdecebb481070297b782baa94efc4fac759568a69bd114f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653414060527569,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bfa0d921a0ce0a55af27a1696709e36,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f08497dae4ccb599defc3157bd84e508d31375ba472a2bb28419103113ff131,PodSandboxId:65431f984a19ec13097627219fb9430474eb25887961ce21ab24c124cefd3a7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653413999842100,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b99a5d59e524a05b9e2d8501bf6d11,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5378fe314f7a6ccad157d8c7e69480e9a035d341e61e570c71a186f8d71d64,PodSandboxId:3860b04bee19bf7b767c5e11a57b09a688be56266c03bdd875b4842531155254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653127564331448,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c950467d-baa1-4baa-b5db-61ca6a85730b name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.783891081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1f11c70-c2ae-4111-9e65-11c890830eff name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.783997987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1f11c70-c2ae-4111-9e65-11c890830eff name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.785148676Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56811e63-447d-43c0-9cda-ee74f0a4ba83 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.785486278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654293785465787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56811e63-447d-43c0-9cda-ee74f0a4ba83 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.786002532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b151d196-589d-49dd-928f-8b92da32378f name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.786072156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b151d196-589d-49dd-928f-8b92da32378f name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:53 no-preload-504385 crio[709]: time="2024-09-06 20:24:53.786276332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:15274b39b451abccf3886a126ddfe07922152942a03e462584d3eee39a2c0f3d,PodSandboxId:2d4fdae5623209a4a9c81bbbadb72d27ef92a9cec8ad8d8baac410a02603ebb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1725653426408777595,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7f4c6d93e53cbbc0c351c91fb7c74a7de9dd10899a589c1bee05f99af6db6a,PodSandboxId:dec01f5a6cb5ff170f39da6190d9eb3c05e7a4534d47e936bd93819b71fcf7ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426457280757,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-ffnb7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59184ee8-fe9e-479d-b298-0ee9818e4a00,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f41a4d40a24ea78c12c4a62e7ea6f4e09d4ff71e4513cd7d0a8d8dd66996ce8,PodSandboxId:a49d2a2d2ae22949d526811d3867714c9769407d2d8bb11ef4e221a26e0aaa4c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1725653426293020635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-lwxzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2
df0b29-0770-447f-8051-fce39e9acff0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5b197d3d52688957a63cce3ecf015b9d869dd995d73cfe8686595ce6ef51df,PodSandboxId:9ac14735d0ad7eab6615b35ba479b621f02f5cb980cef11dcbeb516b0ec1b021,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1725653424969666674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-48s2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd175211-d965-4b1a-a37a-d1e6df47f09b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:badd0b7d7706b2371c40a23bd57b90529c240ed5444d7dc95b4af308c113465f,PodSandboxId:913147acc93fff54b8207751a9fd92d032d6f000344d6d7c0043cfadc44a49cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1725653414151172082,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123de27c3b8551a9387ecadceaf69150,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c171d8f525af6029e27b4f097742a4573016670bf522f208c75d45ccf03ceb4b,PodSandboxId:05b21dab03397f23365b83428df6728dfdf4f3f6d2885a76045d9b67fdebff0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1725653414103380518,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57091fe8c0c738bd8713e9b00e9d6f28c32efebf7704702322ad06f77e17f32b,PodSandboxId:84e4b3e7daacf7879bdecebb481070297b782baa94efc4fac759568a69bd114f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1725653414060527569,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bfa0d921a0ce0a55af27a1696709e36,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f08497dae4ccb599defc3157bd84e508d31375ba472a2bb28419103113ff131,PodSandboxId:65431f984a19ec13097627219fb9430474eb25887961ce21ab24c124cefd3a7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1725653413999842100,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b99a5d59e524a05b9e2d8501bf6d11,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c5378fe314f7a6ccad157d8c7e69480e9a035d341e61e570c71a186f8d71d64,PodSandboxId:3860b04bee19bf7b767c5e11a57b09a688be56266c03bdd875b4842531155254,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1725653127564331448,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-504385,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bad3c201ac54ebb96e54bc9ed809900,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b151d196-589d-49dd-928f-8b92da32378f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6d7f4c6d93e53       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   dec01f5a6cb5f       coredns-6f6b679f8f-ffnb7
	15274b39b451a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   2d4fdae562320       storage-provisioner
	2f41a4d40a24e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   a49d2a2d2ae22       coredns-6f6b679f8f-lwxzl
	6c5b197d3d526       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   14 minutes ago      Running             kube-proxy                0                   9ac14735d0ad7       kube-proxy-48s2x
	badd0b7d7706b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   913147acc93ff       etcd-no-preload-504385
	c171d8f525af6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Running             kube-apiserver            2                   05b21dab03397       kube-apiserver-no-preload-504385
	57091fe8c0c73       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   14 minutes ago      Running             kube-controller-manager   2                   84e4b3e7daacf       kube-controller-manager-no-preload-504385
	3f08497dae4cc       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   14 minutes ago      Running             kube-scheduler            2                   65431f984a19e       kube-scheduler-no-preload-504385
	6c5378fe314f7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 minutes ago      Exited              kube-apiserver            1                   3860b04bee19b       kube-apiserver-no-preload-504385
	
	
	==> coredns [2f41a4d40a24ea78c12c4a62e7ea6f4e09d4ff71e4513cd7d0a8d8dd66996ce8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [6d7f4c6d93e53cbbc0c351c91fb7c74a7de9dd10899a589c1bee05f99af6db6a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-504385
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-504385
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=no-preload-504385
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T20_10_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 20:10:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-504385
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 20:24:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 20:20:43 +0000   Fri, 06 Sep 2024 20:10:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 20:20:43 +0000   Fri, 06 Sep 2024 20:10:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 20:20:43 +0000   Fri, 06 Sep 2024 20:10:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 20:20:43 +0000   Fri, 06 Sep 2024 20:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.184
	  Hostname:    no-preload-504385
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9a3178dcb7145c797377936fb22661e
	  System UUID:                e9a3178d-cb71-45c7-9737-7936fb22661e
	  Boot ID:                    28b88cc4-d161-40d9-993e-423f4a032f1f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-ffnb7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-lwxzl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-504385                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-504385             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-504385    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-48s2x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-504385             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-56mkl              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-504385 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-504385 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-504385 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-504385 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-504385 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-504385 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-504385 event: Registered Node no-preload-504385 in Controller
	
	
	==> dmesg <==
	[  +0.050223] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.230934] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.642640] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep 6 20:05] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.508433] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.060079] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075204] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.193381] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.120241] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.279555] systemd-fstab-generator[700]: Ignoring "noauto" option for root device
	[ +15.938140] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.063317] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.152837] systemd-fstab-generator[1417]: Ignoring "noauto" option for root device
	[  +3.933336] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.202050] kauditd_printk_skb: 57 callbacks suppressed
	[  +8.114658] kauditd_printk_skb: 26 callbacks suppressed
	[Sep 6 20:10] systemd-fstab-generator[3065]: Ignoring "noauto" option for root device
	[  +0.067690] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.499660] systemd-fstab-generator[3386]: Ignoring "noauto" option for root device
	[  +0.087356] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.333236] systemd-fstab-generator[3517]: Ignoring "noauto" option for root device
	[  +0.123882] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.111801] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [badd0b7d7706b2371c40a23bd57b90529c240ed5444d7dc95b4af308c113465f] <==
	{"level":"info","ts":"2024-09-06T20:10:14.529276Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.184:2380"}
	{"level":"info","ts":"2024-09-06T20:10:14.530373Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fd8e42ce2aaa20da","local-member-id":"ac98865638e77ade","added-peer-id":"ac98865638e77ade","added-peer-peer-urls":["https://192.168.61.184:2380"]}
	{"level":"info","ts":"2024-09-06T20:10:15.128729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-06T20:10:15.128872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-06T20:10:15.128982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade received MsgPreVoteResp from ac98865638e77ade at term 1"}
	{"level":"info","ts":"2024-09-06T20:10:15.129051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade became candidate at term 2"}
	{"level":"info","ts":"2024-09-06T20:10:15.129096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade received MsgVoteResp from ac98865638e77ade at term 2"}
	{"level":"info","ts":"2024-09-06T20:10:15.129128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac98865638e77ade became leader at term 2"}
	{"level":"info","ts":"2024-09-06T20:10:15.129207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ac98865638e77ade elected leader ac98865638e77ade at term 2"}
	{"level":"info","ts":"2024-09-06T20:10:15.133830Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ac98865638e77ade","local-member-attributes":"{Name:no-preload-504385 ClientURLs:[https://192.168.61.184:2379]}","request-path":"/0/members/ac98865638e77ade/attributes","cluster-id":"fd8e42ce2aaa20da","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T20:10:15.134232Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:10:15.134630Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:10:15.135099Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T20:10:15.135920Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:10:15.140930Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.184:2379"}
	{"level":"info","ts":"2024-09-06T20:10:15.141467Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T20:10:15.142248Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T20:10:15.145664Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T20:10:15.145699Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T20:10:15.146126Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fd8e42ce2aaa20da","local-member-id":"ac98865638e77ade","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:10:15.146241Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:10:15.146288Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T20:20:15.414418Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":721}
	{"level":"info","ts":"2024-09-06T20:20:15.425344Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":721,"took":"10.346013ms","hash":3972606158,"current-db-size-bytes":2277376,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2277376,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-09-06T20:20:15.425450Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3972606158,"revision":721,"compact-revision":-1}
	
	
	==> kernel <==
	 20:24:54 up 20 min,  0 users,  load average: 0.37, 0.21, 0.17
	Linux no-preload-504385 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6c5378fe314f7a6ccad157d8c7e69480e9a035d341e61e570c71a186f8d71d64] <==
	W0906 20:10:07.680379       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.682977       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.687317       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.745308       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.761145       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.781840       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.812755       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.854001       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.858552       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.884550       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:07.906310       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.014751       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.086990       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.145024       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.151620       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.195820       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.293901       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.395224       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.529315       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.537016       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.552976       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.694069       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.755118       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.756473       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0906 20:10:08.901050       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c171d8f525af6029e27b4f097742a4573016670bf522f208c75d45ccf03ceb4b] <==
	W0906 20:20:17.843357       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:20:17.843452       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:20:17.844644       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:20:17.844665       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:21:17.844895       1 handler_proxy.go:99] no RequestInfo found in the context
	W0906 20:21:17.844989       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:21:17.845192       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0906 20:21:17.845238       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:21:17.846352       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:21:17.846419       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0906 20:23:17.846655       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:23:17.846781       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0906 20:23:17.846687       1 handler_proxy.go:99] no RequestInfo found in the context
	E0906 20:23:17.846896       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0906 20:23:17.848144       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0906 20:23:17.848192       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [57091fe8c0c738bd8713e9b00e9d6f28c32efebf7704702322ad06f77e17f32b] <==
	I0906 20:19:24.332870       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:19:53.800164       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:19:54.341447       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:20:23.807029       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:20:24.355987       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:20:43.069092       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-504385"
	E0906 20:20:53.814487       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:20:54.366459       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:21:18.689418       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="252.413µs"
	E0906 20:21:23.821716       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:21:24.374209       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0906 20:21:32.687127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="56.771µs"
	E0906 20:21:53.828982       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:21:54.382726       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:22:23.837222       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:22:24.391918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:22:53.844456       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:22:54.399926       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:23:23.852261       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:23:24.408225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:23:53.859541       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:23:54.417078       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:24:23.867539       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0906 20:24:24.424757       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0906 20:24:53.875421       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [6c5b197d3d52688957a63cce3ecf015b9d869dd995d73cfe8686595ce6ef51df] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0906 20:10:25.275365       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0906 20:10:25.297382       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.184"]
	E0906 20:10:25.297471       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 20:10:25.397537       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0906 20:10:25.397632       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0906 20:10:25.397662       1 server_linux.go:169] "Using iptables Proxier"
	I0906 20:10:25.409357       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 20:10:25.409683       1 server.go:483] "Version info" version="v1.31.0"
	I0906 20:10:25.409711       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 20:10:25.416825       1 config.go:197] "Starting service config controller"
	I0906 20:10:25.416882       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 20:10:25.417347       1 config.go:104] "Starting endpoint slice config controller"
	I0906 20:10:25.417355       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 20:10:25.417935       1 config.go:326] "Starting node config controller"
	I0906 20:10:25.417943       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 20:10:25.517171       1 shared_informer.go:320] Caches are synced for service config
	I0906 20:10:25.518345       1 shared_informer.go:320] Caches are synced for node config
	I0906 20:10:25.518373       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3f08497dae4ccb599defc3157bd84e508d31375ba472a2bb28419103113ff131] <==
	W0906 20:10:16.864847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 20:10:16.864952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:16.865141       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 20:10:16.865209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.759264       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 20:10:17.759327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.816305       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 20:10:17.816446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.826760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 20:10:17.826930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.837516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 20:10:17.837641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.860367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 20:10:17.860417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.861355       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 20:10:17.861400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:17.969765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 20:10:17.969818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:18.110784       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 20:10:18.110840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:18.123112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 20:10:18.123166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 20:10:18.390987       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 20:10:18.391052       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0906 20:10:21.253224       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 20:23:43 no-preload-504385 kubelet[3393]: E0906 20:23:43.673337    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:23:49 no-preload-504385 kubelet[3393]: E0906 20:23:49.860235    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654229859802170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:23:49 no-preload-504385 kubelet[3393]: E0906 20:23:49.860281    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654229859802170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:23:55 no-preload-504385 kubelet[3393]: E0906 20:23:55.672369    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:23:59 no-preload-504385 kubelet[3393]: E0906 20:23:59.862545    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654239861997798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:23:59 no-preload-504385 kubelet[3393]: E0906 20:23:59.862952    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654239861997798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:09 no-preload-504385 kubelet[3393]: E0906 20:24:09.673108    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:24:09 no-preload-504385 kubelet[3393]: E0906 20:24:09.865219    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654249864834222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:09 no-preload-504385 kubelet[3393]: E0906 20:24:09.865462    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654249864834222,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:19 no-preload-504385 kubelet[3393]: E0906 20:24:19.716016    3393 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 06 20:24:19 no-preload-504385 kubelet[3393]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 06 20:24:19 no-preload-504385 kubelet[3393]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 06 20:24:19 no-preload-504385 kubelet[3393]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 06 20:24:19 no-preload-504385 kubelet[3393]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 06 20:24:19 no-preload-504385 kubelet[3393]: E0906 20:24:19.868135    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654259867699225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:19 no-preload-504385 kubelet[3393]: E0906 20:24:19.868164    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654259867699225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:21 no-preload-504385 kubelet[3393]: E0906 20:24:21.673208    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:24:29 no-preload-504385 kubelet[3393]: E0906 20:24:29.870368    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654269869967506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:29 no-preload-504385 kubelet[3393]: E0906 20:24:29.870951    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654269869967506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:33 no-preload-504385 kubelet[3393]: E0906 20:24:33.672558    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:24:39 no-preload-504385 kubelet[3393]: E0906 20:24:39.872850    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654279872396357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:39 no-preload-504385 kubelet[3393]: E0906 20:24:39.872900    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654279872396357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:47 no-preload-504385 kubelet[3393]: E0906 20:24:47.671637    3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-56mkl" podUID="73747864-24bf-42d0-956b-6047a52ed887"
	Sep 06 20:24:49 no-preload-504385 kubelet[3393]: E0906 20:24:49.874411    3393 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654289874063514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 06 20:24:49 no-preload-504385 kubelet[3393]: E0906 20:24:49.874865    3393 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654289874063514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [15274b39b451abccf3886a126ddfe07922152942a03e462584d3eee39a2c0f3d] <==
	I0906 20:10:26.721647       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 20:10:26.766335       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 20:10:26.766412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 20:10:26.796012       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 20:10:26.797016       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-504385_23ec0dd0-de12-4a78-9abb-d40c60f17bb6!
	I0906 20:10:26.815342       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"689a60c8-594d-47dc-950c-39275506564f", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-504385_23ec0dd0-de12-4a78-9abb-d40c60f17bb6 became leader
	I0906 20:10:26.897747       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-504385_23ec0dd0-de12-4a78-9abb-d40c60f17bb6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-504385 -n no-preload-504385
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-504385 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-56mkl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-504385 describe pod metrics-server-6867b74b74-56mkl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-504385 describe pod metrics-server-6867b74b74-56mkl: exit status 1 (63.869843ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-56mkl" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-504385 describe pod metrics-server-6867b74b74-56mkl: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (315.43s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (144.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:22:53.378226   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:23:34.030963   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/calico-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:23:58.211481   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0906 20:24:23.362635   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298 -n old-k8s-version-843298
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 2 (224.973994ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-843298" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-843298 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-843298 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.847µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-843298 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 2 (228.114988ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-843298 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-843298 logs -n 25: (1.567871479s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-603826 sudo cat                              | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo                                  | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo find                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-603826 sudo crio                             | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-603826                                       | bridge-603826                | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	| delete  | -p                                                     | disable-driver-mounts-859361 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | disable-driver-mounts-859361                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:57 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-504385             | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-458066            | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC | 06 Sep 24 19:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-653828  | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC | 06 Sep 24 19:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:57 UTC |                     |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-504385                  | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-458066                 | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-504385                                   | no-preload-504385            | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-458066                                  | embed-certs-458066           | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-843298        | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653828       | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653828 | jenkins | v1.34.0 | 06 Sep 24 19:59 UTC | 06 Sep 24 20:09 UTC |
	|         | default-k8s-diff-port-653828                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-843298             | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC | 06 Sep 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-843298                              | old-k8s-version-843298       | jenkins | v1.34.0 | 06 Sep 24 20:00 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 20:00:55
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 20:00:55.455816   73230 out.go:345] Setting OutFile to fd 1 ...
	I0906 20:00:55.455933   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.455943   73230 out.go:358] Setting ErrFile to fd 2...
	I0906 20:00:55.455951   73230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 20:00:55.456141   73230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 20:00:55.456685   73230 out.go:352] Setting JSON to false
	I0906 20:00:55.457698   73230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6204,"bootTime":1725646651,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 20:00:55.457762   73230 start.go:139] virtualization: kvm guest
	I0906 20:00:55.459863   73230 out.go:177] * [old-k8s-version-843298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 20:00:55.461119   73230 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 20:00:55.461167   73230 notify.go:220] Checking for updates...
	I0906 20:00:55.463398   73230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 20:00:55.464573   73230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:00:55.465566   73230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 20:00:55.466605   73230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 20:00:55.467834   73230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 20:00:55.469512   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:00:55.470129   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.470183   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.484881   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46595
	I0906 20:00:55.485238   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.485752   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.485776   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.486108   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.486296   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.488175   73230 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0906 20:00:55.489359   73230 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 20:00:55.489671   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:00:55.489705   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:00:55.504589   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0906 20:00:55.505047   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:00:55.505557   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:00:55.505581   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:00:55.505867   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:00:55.506018   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:00:55.541116   73230 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 20:00:55.542402   73230 start.go:297] selected driver: kvm2
	I0906 20:00:55.542423   73230 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.542548   73230 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 20:00:55.543192   73230 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.543257   73230 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 20:00:55.558465   73230 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 20:00:55.558833   73230 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:00:55.558865   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:00:55.558875   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:00:55.558908   73230 start.go:340] cluster config:
	{Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:00:55.559011   73230 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 20:00:55.561521   73230 out.go:177] * Starting "old-k8s-version-843298" primary control-plane node in "old-k8s-version-843298" cluster
	I0906 20:00:55.309027   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:58.377096   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:00:55.562714   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:00:55.562760   73230 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 20:00:55.562773   73230 cache.go:56] Caching tarball of preloaded images
	I0906 20:00:55.562856   73230 preload.go:172] Found /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0906 20:00:55.562868   73230 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0906 20:00:55.562977   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:00:55.563173   73230 start.go:360] acquireMachinesLock for old-k8s-version-843298: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:01:04.457122   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:07.529093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:13.609120   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:16.681107   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:22.761164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:25.833123   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:31.913167   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:34.985108   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:41.065140   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:44.137176   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:50.217162   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:53.289137   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:01:59.369093   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:02.441171   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:08.521164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:11.593164   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:17.673124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:20.745159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:26.825154   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:29.897211   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:35.977181   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:39.049161   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:45.129172   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:48.201208   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:54.281103   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:02:57.353175   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:03.433105   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:06.505124   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:12.585121   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:15.657169   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:21.737151   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:24.809135   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:30.889180   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:33.961145   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:40.041159   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:43.113084   72322 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.184:22: connect: no route to host
	I0906 20:03:46.117237   72441 start.go:364] duration metric: took 4m28.485189545s to acquireMachinesLock for "embed-certs-458066"
	I0906 20:03:46.117298   72441 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:03:46.117309   72441 fix.go:54] fixHost starting: 
	I0906 20:03:46.117737   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:03:46.117773   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:03:46.132573   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0906 20:03:46.133029   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:03:46.133712   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:03:46.133743   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:03:46.134097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:03:46.134322   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:03:46.134505   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:03:46.136291   72441 fix.go:112] recreateIfNeeded on embed-certs-458066: state=Stopped err=<nil>
	I0906 20:03:46.136313   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	W0906 20:03:46.136466   72441 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:03:46.138544   72441 out.go:177] * Restarting existing kvm2 VM for "embed-certs-458066" ...
	I0906 20:03:46.139833   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Start
	I0906 20:03:46.140001   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring networks are active...
	I0906 20:03:46.140754   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network default is active
	I0906 20:03:46.141087   72441 main.go:141] libmachine: (embed-certs-458066) Ensuring network mk-embed-certs-458066 is active
	I0906 20:03:46.141402   72441 main.go:141] libmachine: (embed-certs-458066) Getting domain xml...
	I0906 20:03:46.142202   72441 main.go:141] libmachine: (embed-certs-458066) Creating domain...
	I0906 20:03:47.351460   72441 main.go:141] libmachine: (embed-certs-458066) Waiting to get IP...
	I0906 20:03:47.352248   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.352628   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.352699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.352597   73827 retry.go:31] will retry after 202.870091ms: waiting for machine to come up
	I0906 20:03:46.114675   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:03:46.114711   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115092   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:03:46.115118   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:03:46.115306   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:03:46.117092   72322 machine.go:96] duration metric: took 4m37.429712277s to provisionDockerMachine
	I0906 20:03:46.117135   72322 fix.go:56] duration metric: took 4m37.451419912s for fixHost
	I0906 20:03:46.117144   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 4m37.45145595s
	W0906 20:03:46.117167   72322 start.go:714] error starting host: provision: host is not running
	W0906 20:03:46.117242   72322 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0906 20:03:46.117252   72322 start.go:729] Will try again in 5 seconds ...
	I0906 20:03:47.557228   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.557656   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.557682   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.557606   73827 retry.go:31] will retry after 357.664781ms: waiting for machine to come up
	I0906 20:03:47.917575   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:47.918041   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:47.918068   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:47.918005   73827 retry.go:31] will retry after 338.480268ms: waiting for machine to come up
	I0906 20:03:48.258631   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.259269   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.259305   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.259229   73827 retry.go:31] will retry after 554.173344ms: waiting for machine to come up
	I0906 20:03:48.814947   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:48.815491   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:48.815523   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:48.815449   73827 retry.go:31] will retry after 601.029419ms: waiting for machine to come up
	I0906 20:03:49.418253   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:49.418596   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:49.418623   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:49.418548   73827 retry.go:31] will retry after 656.451458ms: waiting for machine to come up
	I0906 20:03:50.076488   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:50.076908   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:50.076928   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:50.076875   73827 retry.go:31] will retry after 1.13800205s: waiting for machine to come up
	I0906 20:03:51.216380   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:51.216801   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:51.216831   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:51.216758   73827 retry.go:31] will retry after 1.071685673s: waiting for machine to come up
	I0906 20:03:52.289760   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:52.290174   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:52.290202   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:52.290125   73827 retry.go:31] will retry after 1.581761127s: waiting for machine to come up
	I0906 20:03:51.119269   72322 start.go:360] acquireMachinesLock for no-preload-504385: {Name:mke525adc748d173f02ea523120da3d310b4505f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0906 20:03:53.873755   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:53.874150   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:53.874184   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:53.874120   73827 retry.go:31] will retry after 1.99280278s: waiting for machine to come up
	I0906 20:03:55.869267   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:55.869747   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:55.869776   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:55.869685   73827 retry.go:31] will retry after 2.721589526s: waiting for machine to come up
	I0906 20:03:58.594012   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:03:58.594402   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:03:58.594428   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:03:58.594354   73827 retry.go:31] will retry after 2.763858077s: waiting for machine to come up
	I0906 20:04:01.359424   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:01.359775   72441 main.go:141] libmachine: (embed-certs-458066) DBG | unable to find current IP address of domain embed-certs-458066 in network mk-embed-certs-458066
	I0906 20:04:01.359809   72441 main.go:141] libmachine: (embed-certs-458066) DBG | I0906 20:04:01.359736   73827 retry.go:31] will retry after 3.822567166s: waiting for machine to come up
	I0906 20:04:06.669858   72867 start.go:364] duration metric: took 4m9.363403512s to acquireMachinesLock for "default-k8s-diff-port-653828"
	I0906 20:04:06.669929   72867 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:06.669938   72867 fix.go:54] fixHost starting: 
	I0906 20:04:06.670353   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:06.670393   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:06.688290   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44215
	I0906 20:04:06.688752   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:06.689291   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:04:06.689314   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:06.689692   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:06.689886   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:06.690048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:04:06.691557   72867 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653828: state=Stopped err=<nil>
	I0906 20:04:06.691592   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	W0906 20:04:06.691742   72867 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:06.693924   72867 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653828" ...
	I0906 20:04:06.694965   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Start
	I0906 20:04:06.695148   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring networks are active...
	I0906 20:04:06.695900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network default is active
	I0906 20:04:06.696316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Ensuring network mk-default-k8s-diff-port-653828 is active
	I0906 20:04:06.696698   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Getting domain xml...
	I0906 20:04:06.697469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Creating domain...
	I0906 20:04:05.186782   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187288   72441 main.go:141] libmachine: (embed-certs-458066) Found IP for machine: 192.168.39.118
	I0906 20:04:05.187301   72441 main.go:141] libmachine: (embed-certs-458066) Reserving static IP address...
	I0906 20:04:05.187340   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has current primary IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.187764   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.187784   72441 main.go:141] libmachine: (embed-certs-458066) Reserved static IP address: 192.168.39.118
	I0906 20:04:05.187797   72441 main.go:141] libmachine: (embed-certs-458066) DBG | skip adding static IP to network mk-embed-certs-458066 - found existing host DHCP lease matching {name: "embed-certs-458066", mac: "52:54:00:ab:22:05", ip: "192.168.39.118"}
	I0906 20:04:05.187805   72441 main.go:141] libmachine: (embed-certs-458066) Waiting for SSH to be available...
	I0906 20:04:05.187848   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Getting to WaitForSSH function...
	I0906 20:04:05.190229   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190546   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.190576   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.190643   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH client type: external
	I0906 20:04:05.190679   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa (-rw-------)
	I0906 20:04:05.190714   72441 main.go:141] libmachine: (embed-certs-458066) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:05.190727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | About to run SSH command:
	I0906 20:04:05.190761   72441 main.go:141] libmachine: (embed-certs-458066) DBG | exit 0
	I0906 20:04:05.317160   72441 main.go:141] libmachine: (embed-certs-458066) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:05.317483   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetConfigRaw
	I0906 20:04:05.318089   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.320559   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.320944   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.320971   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.321225   72441 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/config.json ...
	I0906 20:04:05.321445   72441 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:05.321465   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:05.321720   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.323699   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.323972   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.324009   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.324126   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.324303   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324444   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.324561   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.324706   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.324940   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.324953   72441 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:05.437192   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:05.437217   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437479   72441 buildroot.go:166] provisioning hostname "embed-certs-458066"
	I0906 20:04:05.437495   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.437665   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.440334   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440705   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.440733   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.440925   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.441100   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441260   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.441405   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.441573   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.441733   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.441753   72441 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-458066 && echo "embed-certs-458066" | sudo tee /etc/hostname
	I0906 20:04:05.566958   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-458066
	
	I0906 20:04:05.566986   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.569652   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.569984   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.570014   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.570158   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:05.570342   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570504   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:05.570648   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:05.570838   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:05.571042   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:05.571060   72441 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-458066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-458066/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-458066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:05.689822   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:05.689855   72441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:05.689882   72441 buildroot.go:174] setting up certificates
	I0906 20:04:05.689891   72441 provision.go:84] configureAuth start
	I0906 20:04:05.689899   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetMachineName
	I0906 20:04:05.690182   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:05.692758   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693151   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.693172   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.693308   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:05.695364   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695727   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:05.695754   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:05.695909   72441 provision.go:143] copyHostCerts
	I0906 20:04:05.695957   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:05.695975   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:05.696042   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:05.696123   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:05.696130   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:05.696153   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:05.696248   72441 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:05.696257   72441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:05.696280   72441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:05.696329   72441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.embed-certs-458066 san=[127.0.0.1 192.168.39.118 embed-certs-458066 localhost minikube]
	I0906 20:04:06.015593   72441 provision.go:177] copyRemoteCerts
	I0906 20:04:06.015656   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:06.015683   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.018244   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018598   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.018630   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.018784   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.018990   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.019169   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.019278   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.110170   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0906 20:04:06.136341   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:06.161181   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:06.184758   72441 provision.go:87] duration metric: took 494.857261ms to configureAuth
	I0906 20:04:06.184786   72441 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:06.184986   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:06.185049   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.187564   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.187955   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.187978   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.188153   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.188399   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188571   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.188723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.188920   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.189070   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.189084   72441 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:06.425480   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:06.425518   72441 machine.go:96] duration metric: took 1.104058415s to provisionDockerMachine
	I0906 20:04:06.425535   72441 start.go:293] postStartSetup for "embed-certs-458066" (driver="kvm2")
	I0906 20:04:06.425548   72441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:06.425572   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.425893   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:06.425919   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.428471   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428768   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.428794   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.428928   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.429109   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.429283   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.429419   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.515180   72441 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:06.519357   72441 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:06.519390   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:06.519464   72441 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:06.519540   72441 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:06.519625   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:06.528542   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:06.552463   72441 start.go:296] duration metric: took 126.912829ms for postStartSetup
	I0906 20:04:06.552514   72441 fix.go:56] duration metric: took 20.435203853s for fixHost
	I0906 20:04:06.552540   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.554994   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555521   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.555556   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.555739   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.555937   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556095   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.556253   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.556409   72441 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:06.556600   72441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0906 20:04:06.556613   72441 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:06.669696   72441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653046.632932221
	
	I0906 20:04:06.669720   72441 fix.go:216] guest clock: 1725653046.632932221
	I0906 20:04:06.669730   72441 fix.go:229] Guest: 2024-09-06 20:04:06.632932221 +0000 UTC Remote: 2024-09-06 20:04:06.552518521 +0000 UTC m=+289.061134864 (delta=80.4137ms)
	I0906 20:04:06.669761   72441 fix.go:200] guest clock delta is within tolerance: 80.4137ms
	I0906 20:04:06.669769   72441 start.go:83] releasing machines lock for "embed-certs-458066", held for 20.552490687s
	I0906 20:04:06.669801   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.670060   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:06.673015   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673405   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.673433   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.673599   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674041   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674210   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:04:06.674304   72441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:06.674351   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.674414   72441 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:06.674437   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:04:06.676916   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677063   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677314   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677341   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677481   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:06.677503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:06.677513   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677686   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:04:06.677691   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677864   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:04:06.677878   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678013   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:04:06.678025   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.678191   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:04:06.758176   72441 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:06.782266   72441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:06.935469   72441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:06.941620   72441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:06.941680   72441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:06.957898   72441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:06.957927   72441 start.go:495] detecting cgroup driver to use...
	I0906 20:04:06.957995   72441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:06.978574   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:06.993967   72441 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:06.994035   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:07.008012   72441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:07.022073   72441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:07.133622   72441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:07.291402   72441 docker.go:233] disabling docker service ...
	I0906 20:04:07.291478   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:07.306422   72441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:07.321408   72441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:07.442256   72441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:07.564181   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:07.579777   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:07.599294   72441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:07.599361   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.610457   72441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:07.610555   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.621968   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.633527   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.645048   72441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:07.659044   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.670526   72441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.689465   72441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:07.701603   72441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:07.712085   72441 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:07.712144   72441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:07.728406   72441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
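About the two steps just above: crio.go first probes the bridge-netfilter sysctl; when /proc/sys/net/bridge/bridge-nf-call-iptables is missing it falls back to loading the br_netfilter module, then it enables IPv4 forwarding. A small Go sketch of that check-then-fallback sequence; the function name is hypothetical and the sketch assumes a Linux host with root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the fallback seen in the log: if the sysctl
    // key is absent, load br_netfilter, then turn on IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            // Key absent: try loading the kernel module that provides it.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }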
	I0906 20:04:07.739888   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:07.862385   72441 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:07.954721   72441 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:07.954792   72441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:07.959478   72441 start.go:563] Will wait 60s for crictl version
	I0906 20:04:07.959545   72441 ssh_runner.go:195] Run: which crictl
	I0906 20:04:07.963893   72441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:08.003841   72441 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:08.003917   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.032191   72441 ssh_runner.go:195] Run: crio --version
	I0906 20:04:08.063563   72441 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
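The CRI-O restart above is followed by two waits: up to 60s for the socket path to appear and up to 60s for crictl to answer. A minimal Go sketch of the socket wait; the polling interval and function name are assumptions:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the CRI socket exists or the timeout passes,
    // in the spirit of the "Will wait 60s for socket path" line above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s not ready after %v", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }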
	I0906 20:04:07.961590   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting to get IP...
	I0906 20:04:07.962441   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962859   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:07.962923   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:07.962841   73982 retry.go:31] will retry after 292.508672ms: waiting for machine to come up
	I0906 20:04:08.257346   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257845   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.257867   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.257815   73982 retry.go:31] will retry after 265.967606ms: waiting for machine to come up
	I0906 20:04:08.525352   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.525907   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.525834   73982 retry.go:31] will retry after 308.991542ms: waiting for machine to come up
	I0906 20:04:08.836444   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837021   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:08.837053   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:08.836973   73982 retry.go:31] will retry after 483.982276ms: waiting for machine to come up
	I0906 20:04:09.322661   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323161   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.323184   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.323125   73982 retry.go:31] will retry after 574.860867ms: waiting for machine to come up
	I0906 20:04:09.899849   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900228   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:09.900256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:09.900187   73982 retry.go:31] will retry after 769.142372ms: waiting for machine to come up
	I0906 20:04:10.671316   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671796   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:10.671853   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:10.671771   73982 retry.go:31] will retry after 720.232224ms: waiting for machine to come up
	I0906 20:04:11.393120   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393502   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:11.393534   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:11.393447   73982 retry.go:31] will retry after 975.812471ms: waiting for machine to come up
	I0906 20:04:08.064907   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetIP
	I0906 20:04:08.067962   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068410   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:04:08.068442   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:04:08.068626   72441 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:08.072891   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:08.086275   72441 kubeadm.go:883] updating cluster {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:08.086383   72441 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:08.086423   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:08.123100   72441 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:08.123158   72441 ssh_runner.go:195] Run: which lz4
	I0906 20:04:08.127330   72441 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:08.131431   72441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:08.131466   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:09.584066   72441 crio.go:462] duration metric: took 1.456765631s to copy over tarball
	I0906 20:04:09.584131   72441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:11.751911   72441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.167751997s)
	I0906 20:04:11.751949   72441 crio.go:469] duration metric: took 2.167848466s to extract the tarball
	I0906 20:04:11.751959   72441 ssh_runner.go:146] rm: /preloaded.tar.lz4
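The preload handling above is: stat /preloaded.tar.lz4, copy the cached tarball over when it is missing, unpack it into /var with tar -I lz4, then delete it. A short Go sketch of the extraction step; the paths are the ones shown in the log and the helper name is hypothetical:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload runs the same style of command as the ssh_runner line above:
    // tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf <tarball>.
    func extractPreload(tarball, destDir string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", destDir, "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
        }
        _ = os.Remove("/preloaded.tar.lz4") // the tarball is removed afterwards, as in the rm step above
    }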
	I0906 20:04:11.790385   72441 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:11.831973   72441 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:11.831995   72441 cache_images.go:84] Images are preloaded, skipping loading
	I0906 20:04:11.832003   72441 kubeadm.go:934] updating node { 192.168.39.118 8443 v1.31.0 crio true true} ...
	I0906 20:04:11.832107   72441 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-458066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:11.832166   72441 ssh_runner.go:195] Run: crio config
	I0906 20:04:11.881946   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:11.881973   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:11.882000   72441 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:11.882028   72441 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-458066 NodeName:embed-certs-458066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:11.882186   72441 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-458066"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:11.882266   72441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:11.892537   72441 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:11.892617   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:11.902278   72441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0906 20:04:11.920451   72441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:11.938153   72441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0906 20:04:11.957510   72441 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:11.961364   72441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.118	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
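The /etc/hosts one-liners above (for host.minikube.internal earlier and control-plane.minikube.internal here) drop any stale line for the name and append the current IP mapping. A Go sketch of the same upsert idea; it writes to a scratch path rather than /etc/hosts and the function name is hypothetical:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry removes any existing line ending in "\t<name>" and
    // appends "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
    func upsertHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        err := upsertHostsEntry("/tmp/hosts.example", "192.168.39.118", "control-plane.minikube.internal")
        fmt.Println("updated:", err == nil)
    }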
	I0906 20:04:11.973944   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:12.109677   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:12.126348   72441 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066 for IP: 192.168.39.118
	I0906 20:04:12.126378   72441 certs.go:194] generating shared ca certs ...
	I0906 20:04:12.126399   72441 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:12.126562   72441 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:12.126628   72441 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:12.126642   72441 certs.go:256] generating profile certs ...
	I0906 20:04:12.126751   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/client.key
	I0906 20:04:12.126843   72441 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key.c10a03b1
	I0906 20:04:12.126904   72441 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key
	I0906 20:04:12.127063   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:12.127111   72441 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:12.127123   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:12.127153   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:12.127189   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:12.127218   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:12.127268   72441 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:12.128117   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:12.185978   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:12.218124   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:12.254546   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:12.290098   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0906 20:04:12.317923   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:12.341186   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:12.363961   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/embed-certs-458066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 20:04:12.388000   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:12.418618   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:12.442213   72441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:12.465894   72441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:12.482404   72441 ssh_runner.go:195] Run: openssl version
	I0906 20:04:12.488370   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:12.499952   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504565   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.504619   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:12.510625   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:12.522202   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:12.370306   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370743   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:12.370779   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:12.370688   73982 retry.go:31] will retry after 1.559820467s: waiting for machine to come up
	I0906 20:04:13.932455   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933042   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:13.933072   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:13.932985   73982 retry.go:31] will retry after 1.968766852s: waiting for machine to come up
	I0906 20:04:15.903304   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903826   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:15.903855   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:15.903775   73982 retry.go:31] will retry after 2.738478611s: waiting for machine to come up
	I0906 20:04:12.533501   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538229   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.538284   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:12.544065   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:12.555220   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:12.566402   72441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571038   72441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.571093   72441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:12.577057   72441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:12.588056   72441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:12.592538   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:12.598591   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:12.604398   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:12.610502   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:12.616513   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:12.622859   72441 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
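The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate is still valid for at least another day. The same question answered in Go with crypto/x509, as a sketch; the path used in main is one of those checked above, and the helper name is an assumption:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // i.e. the condition `openssl x509 -checkend <seconds>` flags as a failure.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, "err:", err)
    }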
	I0906 20:04:12.628975   72441 kubeadm.go:392] StartCluster: {Name:embed-certs-458066 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-458066 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:12.629103   72441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:12.629154   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.667699   72441 cri.go:89] found id: ""
	I0906 20:04:12.667764   72441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:12.678070   72441 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:12.678092   72441 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:12.678148   72441 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:12.687906   72441 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:12.688889   72441 kubeconfig.go:125] found "embed-certs-458066" server: "https://192.168.39.118:8443"
	I0906 20:04:12.690658   72441 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:12.700591   72441 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.118
	I0906 20:04:12.700623   72441 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:12.700635   72441 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:12.700675   72441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:12.741471   72441 cri.go:89] found id: ""
	I0906 20:04:12.741553   72441 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:12.757877   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:12.767729   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:12.767748   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:12.767800   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:12.777094   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:12.777157   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:12.786356   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:12.795414   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:12.795470   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:12.804727   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.813481   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:12.813534   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:12.822844   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:12.831877   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:12.831930   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:12.841082   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:12.850560   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:12.975888   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:13.850754   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.064392   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.140680   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:14.239317   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:14.239411   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:14.740313   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.240388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.740388   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:15.755429   72441 api_server.go:72] duration metric: took 1.516111342s to wait for apiserver process to appear ...
	I0906 20:04:15.755462   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:15.755483   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.544772   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.544807   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.544824   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.596487   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:18.596546   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:18.755752   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:18.761917   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:18.761946   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.256512   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.265937   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.265973   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:19.756568   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:19.763581   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:19.763606   72441 api_server.go:103] status: https://192.168.39.118:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:20.256237   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:04:20.262036   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:04:20.268339   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:20.268364   72441 api_server.go:131] duration metric: took 4.512894792s to wait for apiserver health ...
	I0906 20:04:20.268372   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:04:20.268378   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:20.270262   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:18.644597   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645056   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:18.645088   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:18.644992   73982 retry.go:31] will retry after 2.982517528s: waiting for machine to come up
	I0906 20:04:21.631028   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631392   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | unable to find current IP address of domain default-k8s-diff-port-653828 in network mk-default-k8s-diff-port-653828
	I0906 20:04:21.631414   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | I0906 20:04:21.631367   73982 retry.go:31] will retry after 3.639469531s: waiting for machine to come up
	I0906 20:04:20.271474   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:20.282996   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:04:20.303957   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:20.315560   72441 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:20.315602   72441 system_pods.go:61] "coredns-6f6b679f8f-v6z7z" [b2c18dba-1210-4e95-a705-95abceca92f5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:20.315611   72441 system_pods.go:61] "etcd-embed-certs-458066" [cf60e7c7-1801-42c7-be25-85242c22a5d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:20.315619   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [48c684ec-f93f-49ec-868b-6e7bc20ad506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:20.315625   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [1d55b520-2d8f-4517-a491-8193eaff5d89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:20.315631   72441 system_pods.go:61] "kube-proxy-crvq7" [f0610684-81ee-426a-adc2-aea80faab822] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:20.315639   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [d8744325-58f2-43a8-9a93-516b5a6fb989] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:20.315644   72441 system_pods.go:61] "metrics-server-6867b74b74-gtg94" [600e9c90-20db-407e-b586-fae3809d87b6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:20.315649   72441 system_pods.go:61] "storage-provisioner" [1efe7188-2d33-4a29-afbe-823adbef73b3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:20.315657   72441 system_pods.go:74] duration metric: took 11.674655ms to wait for pod list to return data ...
	I0906 20:04:20.315665   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:20.318987   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:20.319012   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:20.319023   72441 node_conditions.go:105] duration metric: took 3.354197ms to run NodePressure ...
	I0906 20:04:20.319038   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:20.600925   72441 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607562   72441 kubeadm.go:739] kubelet initialised
	I0906 20:04:20.607590   72441 kubeadm.go:740] duration metric: took 6.637719ms waiting for restarted kubelet to initialise ...
	I0906 20:04:20.607602   72441 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:20.611592   72441 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:26.558023   73230 start.go:364] duration metric: took 3m30.994815351s to acquireMachinesLock for "old-k8s-version-843298"
	I0906 20:04:26.558087   73230 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:26.558096   73230 fix.go:54] fixHost starting: 
	I0906 20:04:26.558491   73230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:26.558542   73230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:26.576511   73230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0906 20:04:26.576933   73230 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:26.577434   73230 main.go:141] libmachine: Using API Version  1
	I0906 20:04:26.577460   73230 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:26.577794   73230 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:26.577968   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:26.578128   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetState
	I0906 20:04:26.579640   73230 fix.go:112] recreateIfNeeded on old-k8s-version-843298: state=Stopped err=<nil>
	I0906 20:04:26.579674   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	W0906 20:04:26.579829   73230 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:26.581843   73230 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-843298" ...
	I0906 20:04:25.275406   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275902   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Found IP for machine: 192.168.50.16
	I0906 20:04:25.275942   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has current primary IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.275955   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserving static IP address...
	I0906 20:04:25.276431   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.276463   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Reserved static IP address: 192.168.50.16
	I0906 20:04:25.276482   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | skip adding static IP to network mk-default-k8s-diff-port-653828 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653828", mac: "52:54:00:0a:b1:87", ip: "192.168.50.16"}
	I0906 20:04:25.276493   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Waiting for SSH to be available...
	I0906 20:04:25.276512   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Getting to WaitForSSH function...
	I0906 20:04:25.278727   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279006   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.279037   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.279196   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH client type: external
	I0906 20:04:25.279234   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa (-rw-------)
	I0906 20:04:25.279289   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:25.279312   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | About to run SSH command:
	I0906 20:04:25.279330   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | exit 0
	I0906 20:04:25.405134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | SSH cmd err, output: <nil>: 
	I0906 20:04:25.405524   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetConfigRaw
	I0906 20:04:25.406134   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.408667   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409044   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.409074   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.409332   72867 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/config.json ...
	I0906 20:04:25.409513   72867 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:25.409530   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:25.409724   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.411737   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412027   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.412060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.412171   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.412362   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412489   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.412662   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.412802   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.413045   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.413059   72867 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:25.513313   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:25.513343   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513613   72867 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653828"
	I0906 20:04:25.513644   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.513851   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.516515   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.516847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.516895   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.517116   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.517300   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517461   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.517574   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.517712   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.517891   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.517905   72867 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653828 && echo "default-k8s-diff-port-653828" | sudo tee /etc/hostname
	I0906 20:04:25.637660   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653828
	
	I0906 20:04:25.637691   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.640258   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640600   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.640626   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.640811   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.641001   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641177   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.641333   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.641524   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:25.641732   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:25.641754   72867 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653828/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:25.749746   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:25.749773   72867 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:25.749795   72867 buildroot.go:174] setting up certificates
	I0906 20:04:25.749812   72867 provision.go:84] configureAuth start
	I0906 20:04:25.749828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetMachineName
	I0906 20:04:25.750111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:25.752528   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.752893   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.752920   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.753104   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.755350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755642   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.755666   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.755808   72867 provision.go:143] copyHostCerts
	I0906 20:04:25.755858   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:25.755875   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:25.755930   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:25.756017   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:25.756024   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:25.756046   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:25.756129   72867 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:25.756137   72867 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:25.756155   72867 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:25.756212   72867 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653828 san=[127.0.0.1 192.168.50.16 default-k8s-diff-port-653828 localhost minikube]
	I0906 20:04:25.934931   72867 provision.go:177] copyRemoteCerts
	I0906 20:04:25.935018   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:25.935060   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:25.937539   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.937899   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:25.937925   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:25.938111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:25.938308   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:25.938469   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:25.938644   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.019666   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:26.043989   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0906 20:04:26.066845   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 20:04:26.090526   72867 provision.go:87] duration metric: took 340.698646ms to configureAuth
	I0906 20:04:26.090561   72867 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:26.090786   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:04:26.090878   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.093783   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094167   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.094201   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.094503   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.094689   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094850   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.094975   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.095130   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.095357   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.095389   72867 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:26.324270   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:26.324301   72867 machine.go:96] duration metric: took 914.775498ms to provisionDockerMachine
	I0906 20:04:26.324315   72867 start.go:293] postStartSetup for "default-k8s-diff-port-653828" (driver="kvm2")
	I0906 20:04:26.324328   72867 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:26.324350   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.324726   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:26.324759   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.327339   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327718   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.327750   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.327943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.328147   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.328309   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.328449   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.408475   72867 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:26.413005   72867 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:26.413033   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:26.413107   72867 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:26.413203   72867 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:26.413320   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:26.422811   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:26.449737   72867 start.go:296] duration metric: took 125.408167ms for postStartSetup
	I0906 20:04:26.449772   72867 fix.go:56] duration metric: took 19.779834553s for fixHost
	I0906 20:04:26.449792   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.452589   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.452990   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.453022   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.453323   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.453529   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453710   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.453847   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.453966   72867 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:26.454125   72867 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.16 22 <nil> <nil>}
	I0906 20:04:26.454136   72867 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:26.557844   72867 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653066.531604649
	
	I0906 20:04:26.557875   72867 fix.go:216] guest clock: 1725653066.531604649
	I0906 20:04:26.557884   72867 fix.go:229] Guest: 2024-09-06 20:04:26.531604649 +0000 UTC Remote: 2024-09-06 20:04:26.449775454 +0000 UTC m=+269.281822801 (delta=81.829195ms)
	I0906 20:04:26.557904   72867 fix.go:200] guest clock delta is within tolerance: 81.829195ms
	I0906 20:04:26.557909   72867 start.go:83] releasing machines lock for "default-k8s-diff-port-653828", held for 19.888002519s
	I0906 20:04:26.557943   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.558256   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:26.561285   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561705   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.561732   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.561900   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562425   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562628   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:04:26.562732   72867 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:26.562782   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.562920   72867 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:26.562950   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:04:26.565587   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.565970   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566048   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566149   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566331   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.566542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.566605   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:26.566633   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:26.566744   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.566756   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:04:26.566992   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:04:26.567145   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:04:26.567302   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:04:26.672529   72867 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:26.678762   72867 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:26.825625   72867 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:26.832290   72867 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:26.832363   72867 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:26.848802   72867 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:26.848824   72867 start.go:495] detecting cgroup driver to use...
	I0906 20:04:26.848917   72867 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:26.864986   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:26.878760   72867 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:26.878813   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:26.893329   72867 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:26.909090   72867 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:27.025534   72867 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:27.190190   72867 docker.go:233] disabling docker service ...
	I0906 20:04:27.190293   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:22.617468   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:24.618561   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.118448   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:27.204700   72867 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:27.217880   72867 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:27.346599   72867 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:27.466601   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:27.480785   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:27.501461   72867 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:04:27.501523   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.511815   72867 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:27.511868   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.521806   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.532236   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.542227   72867 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:27.552389   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.563462   72867 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.583365   72867 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:27.594465   72867 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:27.605074   72867 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:27.605140   72867 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:27.618702   72867 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:27.630566   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:27.748387   72867 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:27.841568   72867 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:27.841652   72867 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:27.846880   72867 start.go:563] Will wait 60s for crictl version
	I0906 20:04:27.846936   72867 ssh_runner.go:195] Run: which crictl
	I0906 20:04:27.851177   72867 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:27.895225   72867 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:27.895327   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.934388   72867 ssh_runner.go:195] Run: crio --version
	I0906 20:04:27.966933   72867 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:04:26.583194   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .Start
	I0906 20:04:26.583341   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring networks are active...
	I0906 20:04:26.584046   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network default is active
	I0906 20:04:26.584420   73230 main.go:141] libmachine: (old-k8s-version-843298) Ensuring network mk-old-k8s-version-843298 is active
	I0906 20:04:26.584851   73230 main.go:141] libmachine: (old-k8s-version-843298) Getting domain xml...
	I0906 20:04:26.585528   73230 main.go:141] libmachine: (old-k8s-version-843298) Creating domain...
	I0906 20:04:27.874281   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting to get IP...
	I0906 20:04:27.875189   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:27.875762   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:27.875844   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:27.875754   74166 retry.go:31] will retry after 289.364241ms: waiting for machine to come up
	I0906 20:04:28.166932   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.167349   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.167375   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.167303   74166 retry.go:31] will retry after 317.106382ms: waiting for machine to come up
	I0906 20:04:28.485664   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.486147   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.486241   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.486199   74166 retry.go:31] will retry after 401.712201ms: waiting for machine to come up
	I0906 20:04:28.890039   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:28.890594   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:28.890621   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:28.890540   74166 retry.go:31] will retry after 570.418407ms: waiting for machine to come up
	I0906 20:04:29.462983   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:29.463463   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:29.463489   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:29.463428   74166 retry.go:31] will retry after 696.361729ms: waiting for machine to come up
	I0906 20:04:30.161305   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:30.161829   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:30.161876   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:30.161793   74166 retry.go:31] will retry after 896.800385ms: waiting for machine to come up
	I0906 20:04:27.968123   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetIP
	I0906 20:04:27.971448   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.971880   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:04:27.971904   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:04:27.972128   72867 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:27.981160   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:27.994443   72867 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653
828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:27.994575   72867 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:04:27.994635   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:28.043203   72867 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:04:28.043285   72867 ssh_runner.go:195] Run: which lz4
	I0906 20:04:28.048798   72867 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:28.053544   72867 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:28.053577   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0906 20:04:29.490070   72867 crio.go:462] duration metric: took 1.441303819s to copy over tarball
	I0906 20:04:29.490142   72867 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:31.649831   72867 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.159650072s)
	I0906 20:04:31.649870   72867 crio.go:469] duration metric: took 2.159772826s to extract the tarball
	I0906 20:04:31.649880   72867 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:31.686875   72867 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:31.729557   72867 crio.go:514] all images are preloaded for cri-o runtime.
	I0906 20:04:31.729580   72867 cache_images.go:84] Images are preloaded, skipping loading
	I0906 20:04:31.729587   72867 kubeadm.go:934] updating node { 192.168.50.16 8444 v1.31.0 crio true true} ...
	I0906 20:04:31.729698   72867 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:31.729799   72867 ssh_runner.go:195] Run: crio config
	I0906 20:04:31.777272   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:31.777299   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:31.777316   72867 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:31.777336   72867 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.16 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653828 NodeName:default-k8s-diff-port-653828 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:04:31.777509   72867 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.16
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653828"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:31.777577   72867 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:04:31.788008   72867 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:31.788070   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:31.798261   72867 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0906 20:04:31.815589   72867 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:31.832546   72867 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
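The multi-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of reading such a file back and spot-checking a few ClusterConfiguration fields, assuming gopkg.in/yaml.v3 is available; the struct below is illustrative and is not minikube's own type:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3"
    )

    // clusterConfig captures only the fields we want to inspect from the
    // ClusterConfiguration document; all other keys are ignored.
    type clusterConfig struct {
        Kind              string `yaml:"kind"`
        KubernetesVersion string `yaml:"kubernetesVersion"`
        Networking        struct {
            PodSubnet     string `yaml:"podSubnet"`
            ServiceSubnet string `yaml:"serviceSubnet"`
        } `yaml:"networking"`
    }

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        // The file holds several YAML documents separated by "---" lines.
        for _, doc := range strings.Split(string(raw), "\n---\n") {
            var cfg clusterConfig
            if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
                continue
            }
            if cfg.Kind == "ClusterConfiguration" {
                fmt.Printf("version=%s pods=%s services=%s\n",
                    cfg.KubernetesVersion, cfg.Networking.PodSubnet, cfg.Networking.ServiceSubnet)
            }
        }
    }
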
	I0906 20:04:31.849489   72867 ssh_runner.go:195] Run: grep 192.168.50.16	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:31.853452   72867 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
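The one-liner above keeps the control-plane.minikube.internal mapping idempotent: any existing line for that name is filtered out of /etc/hosts and a fresh "ip<TAB>name" entry is appended. A sketch of composing the same command string in Go, assuming it would then be pushed through the same SSH runner; buildHostsCmd is a hypothetical helper, not a minikube function:

    package main

    import "fmt"

    // buildHostsCmd returns the bash one-liner logged above: drop any line
    // ending in "<tab><name>" from /etc/hosts, then append "<ip><tab><name>".
    func buildHostsCmd(ip, name string) string {
        return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
    }

    func main() {
        fmt.Println(buildHostsCmd("192.168.50.16", "control-plane.minikube.internal"))
    }
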
	I0906 20:04:31.866273   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:31.984175   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:32.001110   72867 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828 for IP: 192.168.50.16
	I0906 20:04:32.001139   72867 certs.go:194] generating shared ca certs ...
	I0906 20:04:32.001160   72867 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:32.001343   72867 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:32.001399   72867 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:32.001413   72867 certs.go:256] generating profile certs ...
	I0906 20:04:32.001509   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/client.key
	I0906 20:04:32.001613   72867 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key.01951d83
	I0906 20:04:32.001665   72867 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key
	I0906 20:04:32.001815   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:32.001866   72867 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:32.001880   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:32.001913   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:32.001933   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:32.001962   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:32.002001   72867 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:32.002812   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:32.037177   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:32.078228   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:32.117445   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:32.153039   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0906 20:04:32.186458   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:04:28.120786   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:28.120826   72441 pod_ready.go:82] duration metric: took 7.509209061s for pod "coredns-6f6b679f8f-v6z7z" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:28.120842   72441 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:30.129518   72441 pod_ready.go:103] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:31.059799   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.060272   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.060294   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.060226   74166 retry.go:31] will retry after 841.627974ms: waiting for machine to come up
	I0906 20:04:31.903823   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:31.904258   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:31.904280   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:31.904238   74166 retry.go:31] will retry after 1.274018797s: waiting for machine to come up
	I0906 20:04:33.179723   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:33.180090   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:33.180133   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:33.180059   74166 retry.go:31] will retry after 1.496142841s: waiting for machine to come up
	I0906 20:04:34.678209   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:34.678697   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:34.678726   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:34.678652   74166 retry.go:31] will retry after 1.795101089s: waiting for machine to come up
	I0906 20:04:32.216815   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:32.245378   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/default-k8s-diff-port-653828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:32.272163   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:32.297017   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:32.321514   72867 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:32.345724   72867 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:32.362488   72867 ssh_runner.go:195] Run: openssl version
	I0906 20:04:32.368722   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:32.380099   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384777   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.384834   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:32.392843   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:04:32.405716   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:32.417043   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422074   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.422143   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:32.427946   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:32.439430   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:32.450466   72867 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455056   72867 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.455114   72867 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:32.460970   72867 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:32.471978   72867 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:32.476838   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:32.483008   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:32.489685   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:32.496446   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:32.502841   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:32.509269   72867 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
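Each "openssl x509 -checkend 86400" call above asks whether a control-plane certificate is still valid 24 hours from now. The same check can be done without shelling out by using crypto/x509; a minimal sketch, with an illustrative certificate path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, i.e. the Go analogue of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
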
	I0906 20:04:32.515687   72867 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-653828 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-653828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:32.515791   72867 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:32.515853   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.567687   72867 cri.go:89] found id: ""
	I0906 20:04:32.567763   72867 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:32.578534   72867 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:32.578552   72867 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:32.578598   72867 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:32.588700   72867 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:32.589697   72867 kubeconfig.go:125] found "default-k8s-diff-port-653828" server: "https://192.168.50.16:8444"
	I0906 20:04:32.591739   72867 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:32.601619   72867 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.16
	I0906 20:04:32.601649   72867 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:32.601659   72867 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:32.601724   72867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:32.640989   72867 cri.go:89] found id: ""
	I0906 20:04:32.641056   72867 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:32.659816   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:32.670238   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:32.670274   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:32.670327   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:04:32.679687   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:32.679778   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:32.689024   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:04:32.698403   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:32.698465   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:32.707806   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.717015   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:32.717105   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:32.726408   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:04:32.735461   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:32.735538   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:32.744701   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:32.754202   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:32.874616   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.759668   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:33.984693   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.051998   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:34.155274   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:34.155384   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:34.655749   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.156069   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.656120   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:35.672043   72867 api_server.go:72] duration metric: took 1.516769391s to wait for apiserver process to appear ...
	I0906 20:04:35.672076   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:04:35.672099   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:32.628208   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.628235   72441 pod_ready.go:82] duration metric: took 4.507383414s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.628248   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633941   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.633965   72441 pod_ready.go:82] duration metric: took 5.709738ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.633975   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639227   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.639249   72441 pod_ready.go:82] duration metric: took 5.26842ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.639259   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644664   72441 pod_ready.go:93] pod "kube-proxy-crvq7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.644690   72441 pod_ready.go:82] duration metric: took 5.423551ms for pod "kube-proxy-crvq7" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.644701   72441 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650000   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:32.650022   72441 pod_ready.go:82] duration metric: took 5.312224ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:32.650034   72441 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:34.657709   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:37.157744   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:38.092386   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.092429   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.092448   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.129071   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:04:38.129110   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:04:38.172277   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.213527   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.213573   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:38.673103   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:38.677672   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:38.677704   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.172237   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.179638   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:04:39.179670   72867 api_server.go:103] status: https://192.168.50.16:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:04:39.672801   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:04:39.678523   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
	I0906 20:04:39.688760   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:04:39.688793   72867 api_server.go:131] duration metric: took 4.016709147s to wait for apiserver health ...
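The sequence above is the usual restart pattern for /healthz: the endpoint first answers 403 (the probe is anonymous), then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200. A minimal polling sketch, assuming an insecure TLS client is acceptable for a local health probe; this is not a copy of minikube's api_server.go:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns HTTP 200 or the timeout expires.
    // 403 and 500 responses are retried, since they are expected while the
    // apiserver is still wiring up RBAC and its post-start hooks.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Local, anonymous probe: skip certificate verification in this sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitHealthz("https://192.168.50.16:8444/healthz", time.Minute); err != nil {
            panic(err)
        }
    }
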
	I0906 20:04:39.688804   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:04:39.688812   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:39.690721   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:04:36.474937   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:36.475399   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:36.475497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:36.475351   74166 retry.go:31] will retry after 1.918728827s: waiting for machine to come up
	I0906 20:04:38.397024   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:38.397588   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:38.397617   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:38.397534   74166 retry.go:31] will retry after 3.460427722s: waiting for machine to come up
	I0906 20:04:39.692055   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:04:39.707875   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:04:39.728797   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:04:39.740514   72867 system_pods.go:59] 8 kube-system pods found
	I0906 20:04:39.740553   72867 system_pods.go:61] "coredns-6f6b679f8f-mvwth" [53675f76-d849-471c-9cd1-561e2f8e6499] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:04:39.740562   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [f69c9488-87d4-487e-902b-588182c2e2e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:04:39.740567   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [d641f983-776e-4102-81a3-ba3cf49911a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:04:39.740579   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [1b09e88d-b038-42d3-9c36-4eee1eff1c4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:04:39.740585   72867 system_pods.go:61] "kube-proxy-9wlq4" [5254a977-ded3-439d-8db0-cd54ccd96940] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:04:39.740590   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [f8c16cf5-2c76-428f-83de-e79c49566683] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:04:39.740594   72867 system_pods.go:61] "metrics-server-6867b74b74-dds56" [6219eb1e-2904-487c-b4ed-d786a0627281] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:04:39.740598   72867 system_pods.go:61] "storage-provisioner" [58dd82cd-e250-4f57-97ad-55408f001cc3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:04:39.740605   72867 system_pods.go:74] duration metric: took 11.784722ms to wait for pod list to return data ...
	I0906 20:04:39.740614   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:04:39.745883   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:04:39.745913   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:04:39.745923   72867 node_conditions.go:105] duration metric: took 5.304169ms to run NodePressure ...
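The kube-system pod list and node-condition checks above are plain API reads. A minimal client-go sketch that lists kube-system pods and prints their readiness, assuming a kubeconfig is readable on disk (the path below is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from a kubeconfig on disk; the path is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" && c.Status == "True" {
                    ready = true
                }
            }
            fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
        }
    }
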
	I0906 20:04:39.745945   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:40.031444   72867 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036537   72867 kubeadm.go:739] kubelet initialised
	I0906 20:04:40.036556   72867 kubeadm.go:740] duration metric: took 5.087185ms waiting for restarted kubelet to initialise ...
	I0906 20:04:40.036563   72867 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:04:40.044926   72867 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:42.050947   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:39.657641   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:42.156327   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:41.860109   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:41.860612   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | unable to find current IP address of domain old-k8s-version-843298 in network mk-old-k8s-version-843298
	I0906 20:04:41.860640   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | I0906 20:04:41.860560   74166 retry.go:31] will retry after 4.509018672s: waiting for machine to come up
	I0906 20:04:44.051148   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.554068   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:44.157427   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:46.656559   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:47.793833   72322 start.go:364] duration metric: took 56.674519436s to acquireMachinesLock for "no-preload-504385"
	I0906 20:04:47.793890   72322 start.go:96] Skipping create...Using existing machine configuration
	I0906 20:04:47.793898   72322 fix.go:54] fixHost starting: 
	I0906 20:04:47.794329   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:04:47.794363   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:04:47.812048   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37475
	I0906 20:04:47.812496   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:04:47.813081   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:04:47.813109   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:04:47.813446   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:04:47.813741   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:04:47.813945   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:04:47.815314   72322 fix.go:112] recreateIfNeeded on no-preload-504385: state=Stopped err=<nil>
	I0906 20:04:47.815338   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	W0906 20:04:47.815507   72322 fix.go:138] unexpected machine state, will restart: <nil>
	I0906 20:04:47.817424   72322 out.go:177] * Restarting existing kvm2 VM for "no-preload-504385" ...
	I0906 20:04:47.818600   72322 main.go:141] libmachine: (no-preload-504385) Calling .Start
	I0906 20:04:47.818760   72322 main.go:141] libmachine: (no-preload-504385) Ensuring networks are active...
	I0906 20:04:47.819569   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network default is active
	I0906 20:04:47.819883   72322 main.go:141] libmachine: (no-preload-504385) Ensuring network mk-no-preload-504385 is active
	I0906 20:04:47.820233   72322 main.go:141] libmachine: (no-preload-504385) Getting domain xml...
	I0906 20:04:47.821002   72322 main.go:141] libmachine: (no-preload-504385) Creating domain...
	I0906 20:04:46.374128   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374599   73230 main.go:141] libmachine: (old-k8s-version-843298) Found IP for machine: 192.168.72.30
	I0906 20:04:46.374629   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has current primary IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.374642   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserving static IP address...
	I0906 20:04:46.375045   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.375071   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | skip adding static IP to network mk-old-k8s-version-843298 - found existing host DHCP lease matching {name: "old-k8s-version-843298", mac: "52:54:00:35:91:5e", ip: "192.168.72.30"}
	I0906 20:04:46.375081   73230 main.go:141] libmachine: (old-k8s-version-843298) Reserved static IP address: 192.168.72.30
	I0906 20:04:46.375104   73230 main.go:141] libmachine: (old-k8s-version-843298) Waiting for SSH to be available...
	I0906 20:04:46.375119   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Getting to WaitForSSH function...
	I0906 20:04:46.377497   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377836   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.377883   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.377956   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH client type: external
	I0906 20:04:46.377982   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa (-rw-------)
	I0906 20:04:46.378028   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:04:46.378044   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | About to run SSH command:
	I0906 20:04:46.378054   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | exit 0
	I0906 20:04:46.505025   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | SSH cmd err, output: <nil>: 
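The WaitForSSH step above shells out to the system ssh binary with non-interactive options and runs "exit 0" until the guest answers. A minimal sketch of the same probe using os/exec; the address, key path, and retry count are placeholders:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady returns true once a non-interactive "ssh ... exit 0" against
    // addr succeeds, retrying up to attempts times with a short pause.
    func sshReady(addr, keyPath string, attempts int) bool {
        args := []string{
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-i", keyPath,
            "docker@" + addr,
            "exit", "0",
        }
        for i := 0; i < attempts; i++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                return true
            }
            time.Sleep(2 * time.Second)
        }
        return false
    }

    func main() {
        fmt.Println(sshReady("192.168.72.30", "/path/to/id_rsa", 5))
    }
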
	I0906 20:04:46.505386   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetConfigRaw
	I0906 20:04:46.506031   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.508401   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.508787   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.508827   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.509092   73230 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/config.json ...
	I0906 20:04:46.509321   73230 machine.go:93] provisionDockerMachine start ...
	I0906 20:04:46.509339   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:46.509549   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.511816   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512230   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.512265   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.512436   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.512618   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512794   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.512932   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.513123   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.513364   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.513378   73230 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:04:46.629437   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:04:46.629469   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629712   73230 buildroot.go:166] provisioning hostname "old-k8s-version-843298"
	I0906 20:04:46.629731   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.629910   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.632226   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632620   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.632653   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.632817   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.633009   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633204   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.633364   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.633544   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.633758   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.633779   73230 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-843298 && echo "old-k8s-version-843298" | sudo tee /etc/hostname
	I0906 20:04:46.764241   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-843298
	
	I0906 20:04:46.764271   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.766678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767063   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.767092   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.767236   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:46.767414   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767591   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:46.767740   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:46.767874   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:46.768069   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:46.768088   73230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-843298' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-843298/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-843298' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:04:46.890399   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:04:46.890424   73230 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:04:46.890461   73230 buildroot.go:174] setting up certificates
	I0906 20:04:46.890471   73230 provision.go:84] configureAuth start
	I0906 20:04:46.890479   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetMachineName
	I0906 20:04:46.890714   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:46.893391   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893765   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.893802   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.893942   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:46.896173   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896505   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:46.896524   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:46.896688   73230 provision.go:143] copyHostCerts
	I0906 20:04:46.896741   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:04:46.896756   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:04:46.896814   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:04:46.896967   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:04:46.896977   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:04:46.897008   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:04:46.897096   73230 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:04:46.897104   73230 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:04:46.897133   73230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:04:46.897193   73230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-843298 san=[127.0.0.1 192.168.72.30 localhost minikube old-k8s-version-843298]
	I0906 20:04:47.128570   73230 provision.go:177] copyRemoteCerts
	I0906 20:04:47.128627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:04:47.128653   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.131548   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.131952   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.131981   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.132164   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.132396   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.132571   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.132705   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.223745   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:04:47.249671   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0906 20:04:47.274918   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:04:47.300351   73230 provision.go:87] duration metric: took 409.869395ms to configureAuth
	I0906 20:04:47.300376   73230 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:04:47.300584   73230 config.go:182] Loaded profile config "old-k8s-version-843298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 20:04:47.300673   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.303255   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303559   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.303581   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.303739   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.303943   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304098   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.304266   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.304407   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.304623   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.304644   73230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:04:47.539793   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:04:47.539824   73230 machine.go:96] duration metric: took 1.030489839s to provisionDockerMachine
	I0906 20:04:47.539836   73230 start.go:293] postStartSetup for "old-k8s-version-843298" (driver="kvm2")
	I0906 20:04:47.539849   73230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:04:47.539884   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.540193   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:04:47.540220   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.543190   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543482   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.543506   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.543707   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.543938   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.544097   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.544243   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.633100   73230 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:04:47.637336   73230 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:04:47.637368   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:04:47.637459   73230 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:04:47.637541   73230 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:04:47.637627   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:04:47.648442   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:47.672907   73230 start.go:296] duration metric: took 133.055727ms for postStartSetup
	I0906 20:04:47.672951   73230 fix.go:56] duration metric: took 21.114855209s for fixHost
	I0906 20:04:47.672978   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.675459   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.675833   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.675863   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.676005   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.676303   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676471   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.676661   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.676846   73230 main.go:141] libmachine: Using SSH client type: native
	I0906 20:04:47.677056   73230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0906 20:04:47.677070   73230 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:04:47.793647   73230 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653087.750926682
	
	I0906 20:04:47.793671   73230 fix.go:216] guest clock: 1725653087.750926682
	I0906 20:04:47.793681   73230 fix.go:229] Guest: 2024-09-06 20:04:47.750926682 +0000 UTC Remote: 2024-09-06 20:04:47.67295613 +0000 UTC m=+232.250384025 (delta=77.970552ms)
	I0906 20:04:47.793735   73230 fix.go:200] guest clock delta is within tolerance: 77.970552ms
	I0906 20:04:47.793746   73230 start.go:83] releasing machines lock for "old-k8s-version-843298", held for 21.235682628s
	I0906 20:04:47.793778   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.794059   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:47.796792   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797195   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.797229   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.797425   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798019   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798230   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .DriverName
	I0906 20:04:47.798314   73230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:04:47.798360   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.798488   73230 ssh_runner.go:195] Run: cat /version.json
	I0906 20:04:47.798509   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHHostname
	I0906 20:04:47.801253   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801632   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.801658   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801678   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.801867   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802060   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802122   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:47.802152   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:47.802210   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802318   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHPort
	I0906 20:04:47.802460   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHKeyPath
	I0906 20:04:47.802504   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.802580   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetSSHUsername
	I0906 20:04:47.802722   73230 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/old-k8s-version-843298/id_rsa Username:docker}
	I0906 20:04:47.886458   73230 ssh_runner.go:195] Run: systemctl --version
	I0906 20:04:47.910204   73230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:04:48.055661   73230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:04:48.063024   73230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:04:48.063090   73230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:04:48.084749   73230 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:04:48.084771   73230 start.go:495] detecting cgroup driver to use...
	I0906 20:04:48.084892   73230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:04:48.105494   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:04:48.123487   73230 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:04:48.123564   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:04:48.145077   73230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:04:48.161336   73230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:04:48.283568   73230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:04:48.445075   73230 docker.go:233] disabling docker service ...
	I0906 20:04:48.445146   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:04:48.461122   73230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:04:48.475713   73230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:04:48.632804   73230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:04:48.762550   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:04:48.778737   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:04:48.798465   73230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0906 20:04:48.798549   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.811449   73230 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:04:48.811523   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.824192   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.835598   73230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:04:48.847396   73230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:04:48.860005   73230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:04:48.871802   73230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:04:48.871864   73230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:04:48.887596   73230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:04:48.899508   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:49.041924   73230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:04:49.144785   73230 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:04:49.144885   73230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:04:49.150404   73230 start.go:563] Will wait 60s for crictl version
	I0906 20:04:49.150461   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:49.154726   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:04:49.202450   73230 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0906 20:04:49.202557   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.235790   73230 ssh_runner.go:195] Run: crio --version
	I0906 20:04:49.270094   73230 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0906 20:04:49.271457   73230 main.go:141] libmachine: (old-k8s-version-843298) Calling .GetIP
	I0906 20:04:49.274710   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275114   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:35:91:5e", ip: ""} in network mk-old-k8s-version-843298: {Iface:virbr4 ExpiryTime:2024-09-06 20:55:00 +0000 UTC Type:0 Mac:52:54:00:35:91:5e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-843298 Clientid:01:52:54:00:35:91:5e}
	I0906 20:04:49.275139   73230 main.go:141] libmachine: (old-k8s-version-843298) DBG | domain old-k8s-version-843298 has defined IP address 192.168.72.30 and MAC address 52:54:00:35:91:5e in network mk-old-k8s-version-843298
	I0906 20:04:49.275475   73230 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0906 20:04:49.280437   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:49.293664   73230 kubeadm.go:883] updating cluster {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:04:49.293793   73230 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 20:04:49.293842   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:49.348172   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:49.348251   73230 ssh_runner.go:195] Run: which lz4
	I0906 20:04:49.352703   73230 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0906 20:04:49.357463   73230 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0906 20:04:49.357501   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0906 20:04:49.056116   72867 pod_ready.go:103] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:51.553185   72867 pod_ready.go:93] pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.553217   72867 pod_ready.go:82] duration metric: took 11.508264695s for pod "coredns-6f6b679f8f-mvwth" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.553231   72867 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563758   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.563788   72867 pod_ready.go:82] duration metric: took 10.547437ms for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.563802   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570906   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:51.570940   72867 pod_ready.go:82] duration metric: took 7.128595ms for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:51.570957   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:48.657527   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:50.662561   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:49.146755   72322 main.go:141] libmachine: (no-preload-504385) Waiting to get IP...
	I0906 20:04:49.147780   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.148331   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.148406   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.148309   74322 retry.go:31] will retry after 250.314453ms: waiting for machine to come up
	I0906 20:04:49.399920   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.400386   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.400468   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.400345   74322 retry.go:31] will retry after 247.263156ms: waiting for machine to come up
	I0906 20:04:49.648894   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:49.649420   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:49.649445   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:49.649376   74322 retry.go:31] will retry after 391.564663ms: waiting for machine to come up
	I0906 20:04:50.043107   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.043594   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.043617   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.043548   74322 retry.go:31] will retry after 513.924674ms: waiting for machine to come up
	I0906 20:04:50.559145   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:50.559637   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:50.559675   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:50.559543   74322 retry.go:31] will retry after 551.166456ms: waiting for machine to come up
	I0906 20:04:51.111906   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.112967   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.112999   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.112921   74322 retry.go:31] will retry after 653.982425ms: waiting for machine to come up
	I0906 20:04:51.768950   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:51.769466   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:51.769496   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:51.769419   74322 retry.go:31] will retry after 935.670438ms: waiting for machine to come up
	I0906 20:04:52.706493   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:52.707121   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:52.707152   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:52.707062   74322 retry.go:31] will retry after 1.141487289s: waiting for machine to come up
	I0906 20:04:51.190323   73230 crio.go:462] duration metric: took 1.837657617s to copy over tarball
	I0906 20:04:51.190410   73230 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0906 20:04:54.320754   73230 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.130319477s)
	I0906 20:04:54.320778   73230 crio.go:469] duration metric: took 3.130424981s to extract the tarball
	I0906 20:04:54.320785   73230 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0906 20:04:54.388660   73230 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:04:54.427475   73230 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0906 20:04:54.427505   73230 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:04:54.427580   73230 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.427594   73230 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.427611   73230 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.427662   73230 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.427691   73230 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.427696   73230 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.427813   73230 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.427672   73230 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0906 20:04:54.429432   73230 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.429443   73230 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.429447   73230 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.429448   73230 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.429475   73230 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:54.429449   73230 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.429496   73230 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.429589   73230 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0906 20:04:54.603502   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.607745   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.610516   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.613580   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.616591   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.622381   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0906 20:04:54.636746   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.690207   73230 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0906 20:04:54.690254   73230 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.690306   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.788758   73230 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0906 20:04:54.788804   73230 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.788876   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.804173   73230 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0906 20:04:54.804228   73230 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.804273   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817005   73230 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0906 20:04:54.817056   73230 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.817074   73230 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0906 20:04:54.817101   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817122   73230 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.817138   73230 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0906 20:04:54.817167   73230 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0906 20:04:54.817202   73230 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0906 20:04:54.817213   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817220   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.817227   73230 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.817168   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817253   73230 ssh_runner.go:195] Run: which crictl
	I0906 20:04:54.817301   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:54.817333   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902264   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:54.902422   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:54.902522   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:54.902569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:54.902602   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:54.902654   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:54.902708   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.061686   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.073933   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.085364   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0906 20:04:55.085463   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.085399   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0906 20:04:55.085610   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0906 20:04:55.085725   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.192872   73230 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:04:55.196085   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0906 20:04:55.255204   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0906 20:04:55.288569   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0906 20:04:55.291461   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0906 20:04:55.291541   73230 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0906 20:04:55.291559   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0906 20:04:55.291726   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0906 20:04:53.578469   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.578504   72867 pod_ready.go:82] duration metric: took 2.007539423s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.578534   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583560   72867 pod_ready.go:93] pod "kube-proxy-9wlq4" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:53.583583   72867 pod_ready.go:82] duration metric: took 5.037068ms for pod "kube-proxy-9wlq4" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:53.583594   72867 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832422   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:04:54.832453   72867 pod_ready.go:82] duration metric: took 1.248849975s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:54.832480   72867 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	I0906 20:04:56.840031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:53.156842   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:55.236051   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:53.849822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:53.850213   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:53.850235   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:53.850178   74322 retry.go:31] will retry after 1.858736556s: waiting for machine to come up
	I0906 20:04:55.710052   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:55.710550   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:55.710598   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:55.710496   74322 retry.go:31] will retry after 2.033556628s: waiting for machine to come up
	I0906 20:04:57.745989   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:57.746433   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:57.746459   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:57.746388   74322 retry.go:31] will retry after 1.985648261s: waiting for machine to come up
	I0906 20:04:55.500590   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0906 20:04:55.500702   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0906 20:04:55.500740   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0906 20:04:55.500824   73230 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0906 20:04:55.500885   73230 cache_images.go:92] duration metric: took 1.07336017s to LoadCachedImages
	W0906 20:04:55.500953   73230 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0906 20:04:55.500969   73230 kubeadm.go:934] updating node { 192.168.72.30 8443 v1.20.0 crio true true} ...
	I0906 20:04:55.501112   73230 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-843298 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:04:55.501192   73230 ssh_runner.go:195] Run: crio config
	I0906 20:04:55.554097   73230 cni.go:84] Creating CNI manager for ""
	I0906 20:04:55.554119   73230 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:04:55.554135   73230 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:04:55.554154   73230 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-843298 NodeName:old-k8s-version-843298 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0906 20:04:55.554359   73230 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-843298"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:04:55.554441   73230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0906 20:04:55.565923   73230 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:04:55.566004   73230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:04:55.577366   73230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0906 20:04:55.595470   73230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:04:55.614641   73230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0906 20:04:55.637739   73230 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0906 20:04:55.642233   73230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:04:55.658409   73230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:04:55.804327   73230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:04:55.824288   73230 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298 for IP: 192.168.72.30
	I0906 20:04:55.824308   73230 certs.go:194] generating shared ca certs ...
	I0906 20:04:55.824323   73230 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:55.824479   73230 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:04:55.824541   73230 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:04:55.824560   73230 certs.go:256] generating profile certs ...
	I0906 20:04:55.824680   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/client.key
	I0906 20:04:55.824755   73230 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key.f5190fa3
	I0906 20:04:55.824799   73230 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key
	I0906 20:04:55.824952   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:04:55.824995   73230 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:04:55.825008   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:04:55.825041   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:04:55.825072   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:04:55.825102   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:04:55.825158   73230 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:04:55.825878   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:04:55.868796   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:04:55.905185   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:04:55.935398   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:04:55.973373   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0906 20:04:56.008496   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0906 20:04:56.046017   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:04:56.080049   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/old-k8s-version-843298/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:04:56.122717   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:04:56.151287   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:04:56.184273   73230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:04:56.216780   73230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:04:56.239708   73230 ssh_runner.go:195] Run: openssl version
	I0906 20:04:56.246127   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:04:56.257597   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262515   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.262594   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:04:56.269207   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:04:56.281646   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:04:56.293773   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299185   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.299255   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:04:56.305740   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:04:56.319060   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:04:56.330840   73230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336013   73230 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.336082   73230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:04:56.342576   73230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
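
The lines above install each CA certificate by asking openssl for its subject hash and symlinking the PEM into /etc/ssl/certs under that hash. A minimal Go sketch of the same pattern (not minikube's actual code; the helper name and paths are illustrative, and it assumes openssl is on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert asks openssl for the certificate's subject hash and exposes the
// PEM to the system trust store as /etc/ssl/certs/<hash>.0, mirroring the
// "openssl x509 -hash -noout" + "ln -fs" pair in the log above.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // drop any stale link before re-pointing it
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
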
	I0906 20:04:56.354648   73230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:04:56.359686   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:04:56.366321   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:04:56.372646   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:04:56.379199   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:04:56.386208   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:04:56.392519   73230 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0906 20:04:56.399335   73230 kubeadm.go:392] StartCluster: {Name:old-k8s-version-843298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-843298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:04:56.399442   73230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:04:56.399495   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.441986   73230 cri.go:89] found id: ""
	I0906 20:04:56.442069   73230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:04:56.454884   73230 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:04:56.454907   73230 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:04:56.454977   73230 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:04:56.465647   73230 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:04:56.466650   73230 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-843298" does not appear in /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:04:56.467285   73230 kubeconfig.go:62] /home/jenkins/minikube-integration/19576-6021/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-843298" cluster setting kubeconfig missing "old-k8s-version-843298" context setting]
	I0906 20:04:56.468248   73230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:04:56.565587   73230 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:04:56.576221   73230 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.30
	I0906 20:04:56.576261   73230 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:04:56.576277   73230 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:04:56.576342   73230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:04:56.621597   73230 cri.go:89] found id: ""
	I0906 20:04:56.621663   73230 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:04:56.639924   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:04:56.649964   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:04:56.649989   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:04:56.650042   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:04:56.661290   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:04:56.661343   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:04:56.671361   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:04:56.680865   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:04:56.680939   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:04:56.696230   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.706613   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:04:56.706692   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:04:56.719635   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:04:56.729992   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:04:56.730045   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:04:56.740040   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:04:56.750666   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:56.891897   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.681824   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:57.972206   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:04:58.091751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
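
With the stale kubeconfigs removed, the control plane is rebuilt by running individual kubeadm init phases against the generated config, in the order shown above. A hedged Go sketch of that sequence (illustrative only; the real run invokes kubeadm via sudo from /var/lib/minikube/binaries/v1.20.0 over SSH, which is omitted here):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The same phase order as the log: certs, kubeconfig, kubelet-start,
	// control-plane, etcd -- each run against the generated kubeadm.yaml.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(append([]string{}, phase...), "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
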
	I0906 20:04:58.206345   73230 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:04:58.206443   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:58.707412   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.206780   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:04:59.707273   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:00.207218   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
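
The repeated pgrep calls above poll roughly every 500ms for a kube-apiserver process to appear after the control plane is restarted. A small Go sketch of that wait loop (an approximation of the pattern, not minikube's api_server.go; the timeout value is an assumption):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep every 500ms until a kube-apiserver
// process matching the minikube pattern exists, or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process is found.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver process is up")
}
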
	I0906 20:04:59.340092   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:01.838387   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:57.658033   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:00.157741   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:04:59.734045   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:04:59.734565   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:04:59.734592   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:04:59.734506   74322 retry.go:31] will retry after 2.767491398s: waiting for machine to come up
	I0906 20:05:02.505314   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:02.505749   72322 main.go:141] libmachine: (no-preload-504385) DBG | unable to find current IP address of domain no-preload-504385 in network mk-no-preload-504385
	I0906 20:05:02.505780   72322 main.go:141] libmachine: (no-preload-504385) DBG | I0906 20:05:02.505697   74322 retry.go:31] will retry after 3.51382931s: waiting for machine to come up
	I0906 20:05:00.707010   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.206708   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:01.707125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.207349   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:02.706670   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.207287   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.706650   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.207125   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:04.707193   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:05.207119   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:03.838639   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:05.839195   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:02.655906   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:04.656677   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:07.157732   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:06.023595   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024063   72322 main.go:141] libmachine: (no-preload-504385) Found IP for machine: 192.168.61.184
	I0906 20:05:06.024095   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has current primary IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.024105   72322 main.go:141] libmachine: (no-preload-504385) Reserving static IP address...
	I0906 20:05:06.024576   72322 main.go:141] libmachine: (no-preload-504385) Reserved static IP address: 192.168.61.184
	I0906 20:05:06.024598   72322 main.go:141] libmachine: (no-preload-504385) Waiting for SSH to be available...
	I0906 20:05:06.024621   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.024643   72322 main.go:141] libmachine: (no-preload-504385) DBG | skip adding static IP to network mk-no-preload-504385 - found existing host DHCP lease matching {name: "no-preload-504385", mac: "52:54:00:4c:57:e7", ip: "192.168.61.184"}
	I0906 20:05:06.024666   72322 main.go:141] libmachine: (no-preload-504385) DBG | Getting to WaitForSSH function...
	I0906 20:05:06.026845   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027166   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.027219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.027296   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH client type: external
	I0906 20:05:06.027321   72322 main.go:141] libmachine: (no-preload-504385) DBG | Using SSH private key: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa (-rw-------)
	I0906 20:05:06.027355   72322 main.go:141] libmachine: (no-preload-504385) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0906 20:05:06.027376   72322 main.go:141] libmachine: (no-preload-504385) DBG | About to run SSH command:
	I0906 20:05:06.027403   72322 main.go:141] libmachine: (no-preload-504385) DBG | exit 0
	I0906 20:05:06.148816   72322 main.go:141] libmachine: (no-preload-504385) DBG | SSH cmd err, output: <nil>: 
	I0906 20:05:06.149196   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetConfigRaw
	I0906 20:05:06.149951   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.152588   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.152970   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.153003   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.153238   72322 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/config.json ...
	I0906 20:05:06.153485   72322 machine.go:93] provisionDockerMachine start ...
	I0906 20:05:06.153508   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:06.153714   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.156031   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156394   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.156425   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.156562   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.156732   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.156901   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.157051   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.157205   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.157411   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.157425   72322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 20:05:06.261544   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0906 20:05:06.261586   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.261861   72322 buildroot.go:166] provisioning hostname "no-preload-504385"
	I0906 20:05:06.261895   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.262063   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.264812   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265192   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.265219   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.265400   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.265570   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265705   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.265856   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.265990   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.266145   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.266157   72322 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-504385 && echo "no-preload-504385" | sudo tee /etc/hostname
	I0906 20:05:06.383428   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-504385
	
	I0906 20:05:06.383456   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.386368   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386722   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.386755   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.386968   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.387152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387322   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.387439   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.387617   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.387817   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.387840   72322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-504385' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-504385/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-504385' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 20:05:06.501805   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 20:05:06.501836   72322 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19576-6021/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-6021/.minikube}
	I0906 20:05:06.501854   72322 buildroot.go:174] setting up certificates
	I0906 20:05:06.501866   72322 provision.go:84] configureAuth start
	I0906 20:05:06.501873   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetMachineName
	I0906 20:05:06.502152   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:06.504721   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505086   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.505115   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.505250   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.507420   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507765   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.507795   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.507940   72322 provision.go:143] copyHostCerts
	I0906 20:05:06.508008   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem, removing ...
	I0906 20:05:06.508031   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem
	I0906 20:05:06.508087   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/ca.pem (1078 bytes)
	I0906 20:05:06.508175   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem, removing ...
	I0906 20:05:06.508183   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem
	I0906 20:05:06.508208   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/cert.pem (1123 bytes)
	I0906 20:05:06.508297   72322 exec_runner.go:144] found /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem, removing ...
	I0906 20:05:06.508307   72322 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem
	I0906 20:05:06.508338   72322 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-6021/.minikube/key.pem (1675 bytes)
	I0906 20:05:06.508406   72322 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem org=jenkins.no-preload-504385 san=[127.0.0.1 192.168.61.184 localhost minikube no-preload-504385]
	I0906 20:05:06.681719   72322 provision.go:177] copyRemoteCerts
	I0906 20:05:06.681786   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 20:05:06.681810   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.684460   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684779   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.684822   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.684962   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.685125   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.685258   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.685368   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:06.767422   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 20:05:06.794881   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0906 20:05:06.821701   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 20:05:06.848044   72322 provision.go:87] duration metric: took 346.1664ms to configureAuth
	I0906 20:05:06.848075   72322 buildroot.go:189] setting minikube options for container-runtime
	I0906 20:05:06.848271   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:05:06.848348   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:06.850743   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851037   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:06.851064   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:06.851226   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:06.851395   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851549   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:06.851674   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:06.851791   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:06.851993   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:06.852020   72322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0906 20:05:07.074619   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0906 20:05:07.074643   72322 machine.go:96] duration metric: took 921.143238ms to provisionDockerMachine
	I0906 20:05:07.074654   72322 start.go:293] postStartSetup for "no-preload-504385" (driver="kvm2")
	I0906 20:05:07.074664   72322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 20:05:07.074678   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.075017   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 20:05:07.075042   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.077988   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078268   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.078287   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.078449   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.078634   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.078791   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.078946   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.165046   72322 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 20:05:07.169539   72322 info.go:137] Remote host: Buildroot 2023.02.9
	I0906 20:05:07.169565   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/addons for local assets ...
	I0906 20:05:07.169631   72322 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-6021/.minikube/files for local assets ...
	I0906 20:05:07.169700   72322 filesync.go:149] local asset: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem -> 131782.pem in /etc/ssl/certs
	I0906 20:05:07.169783   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0906 20:05:07.179344   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:07.204213   72322 start.go:296] duration metric: took 129.545341ms for postStartSetup
	I0906 20:05:07.204265   72322 fix.go:56] duration metric: took 19.41036755s for fixHost
	I0906 20:05:07.204287   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.207087   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207473   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.207513   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.207695   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.207905   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208090   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.208267   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.208436   72322 main.go:141] libmachine: Using SSH client type: native
	I0906 20:05:07.208640   72322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.184 22 <nil> <nil>}
	I0906 20:05:07.208655   72322 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0906 20:05:07.314172   72322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1725653107.281354639
	
	I0906 20:05:07.314195   72322 fix.go:216] guest clock: 1725653107.281354639
	I0906 20:05:07.314205   72322 fix.go:229] Guest: 2024-09-06 20:05:07.281354639 +0000 UTC Remote: 2024-09-06 20:05:07.204269406 +0000 UTC m=+358.676673749 (delta=77.085233ms)
	I0906 20:05:07.314228   72322 fix.go:200] guest clock delta is within tolerance: 77.085233ms
	I0906 20:05:07.314237   72322 start.go:83] releasing machines lock for "no-preload-504385", held for 19.52037381s
	I0906 20:05:07.314266   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.314552   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:07.317476   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.317839   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.317873   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.318003   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318542   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318716   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:05:07.318821   72322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 20:05:07.318876   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.318991   72322 ssh_runner.go:195] Run: cat /version.json
	I0906 20:05:07.319018   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:05:07.321880   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322102   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322308   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322340   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322472   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322508   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:07.322550   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:07.322685   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.322713   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:05:07.322868   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.322875   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:05:07.323062   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:05:07.323066   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.323221   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:05:07.424438   72322 ssh_runner.go:195] Run: systemctl --version
	I0906 20:05:07.430755   72322 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0906 20:05:07.579436   72322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0906 20:05:07.585425   72322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0906 20:05:07.585493   72322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 20:05:07.601437   72322 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0906 20:05:07.601462   72322 start.go:495] detecting cgroup driver to use...
	I0906 20:05:07.601529   72322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0906 20:05:07.620368   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0906 20:05:07.634848   72322 docker.go:217] disabling cri-docker service (if available) ...
	I0906 20:05:07.634912   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 20:05:07.648810   72322 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 20:05:07.664084   72322 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 20:05:07.796601   72322 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 20:05:07.974836   72322 docker.go:233] disabling docker service ...
	I0906 20:05:07.974911   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 20:05:07.989013   72322 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 20:05:08.002272   72322 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 20:05:08.121115   72322 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 20:05:08.247908   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 20:05:08.262855   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 20:05:08.281662   72322 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0906 20:05:08.281730   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.292088   72322 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0906 20:05:08.292165   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.302601   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.313143   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.323852   72322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 20:05:08.335791   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.347619   72322 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.365940   72322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0906 20:05:08.376124   72322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 20:05:08.385677   72322 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0906 20:05:08.385743   72322 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0906 20:05:08.398445   72322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 20:05:08.408477   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:08.518447   72322 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0906 20:05:08.613636   72322 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0906 20:05:08.613707   72322 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0906 20:05:08.619050   72322 start.go:563] Will wait 60s for crictl version
	I0906 20:05:08.619134   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:08.622959   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 20:05:08.668229   72322 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
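
After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock before probing crictl. A minimal Go sketch of such a wait-for-socket loop (not minikube's start.go; the poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket file until it exists or the
// timeout elapses, mirroring the "Will wait 60s for socket path" step.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI-O socket is available")
}
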
	I0906 20:05:08.668297   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.702416   72322 ssh_runner.go:195] Run: crio --version
	I0906 20:05:08.733283   72322 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0906 20:05:05.707351   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.206573   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:06.707452   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.206554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:07.706854   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.206925   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:08.707456   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.207200   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:09.706741   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:10.206605   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
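	Note: the 73230 lines above belong to a second profile that is still waiting for its kube-apiserver process to appear, re-running pgrep roughly every half second. A minimal sketch of that wait loop follows; the 4-minute timeout is an arbitrary illustration, not minikube's value.

```go
// Minimal sketch of the "wait for apiserver process" loop above: poll pgrep
// every 500ms until a kube-apiserver process is found or a timeout elapses.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// Equivalent to: sudo pgrep -xnf kube-apiserver.*minikube.*
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```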
	I0906 20:05:07.839381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.839918   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:09.157889   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:11.158761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:08.734700   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetIP
	I0906 20:05:08.737126   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737477   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:05:08.737504   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:05:08.737692   72322 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0906 20:05:08.741940   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
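	Note: the two lines above make the host.minikube.internal entry in /etc/hosts idempotent: any existing entry is filtered out and the current gateway IP is appended. A rough Go equivalent of that shell one-liner, with the path and IP taken from the log:

```go
// Rough Go equivalent of the /etc/hosts one-liner above: drop any existing
// host.minikube.internal line and append the gateway entry from the log.
// Writing /etc/hosts directly (instead of via a temp file and sudo cp) is a
// simplification for this sketch.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.61.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}
```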
	I0906 20:05:08.756235   72322 kubeadm.go:883] updating cluster {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 20:05:08.756380   72322 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0906 20:05:08.756426   72322 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 20:05:08.798359   72322 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0906 20:05:08.798388   72322 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0906 20:05:08.798484   72322 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.798507   72322 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.798520   72322 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0906 20:05:08.798559   72322 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.798512   72322 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.798571   72322 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.798494   72322 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.798489   72322 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800044   72322 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:08.800055   72322 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.800048   72322 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0906 20:05:08.800067   72322 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.800070   72322 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:08.800043   72322 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.800046   72322 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:08.800050   72322 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
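	Note: with no preload available, cache_images.go first asks the local image daemon for each pinned image (every lookup above fails with "No such image") and then, as the following lines show, probes the VM's runtime with podman image inspect; any image whose ID does not match the expected hash is marked "needs transfer". A minimal sketch of that decision, with inspectImageID standing in for the podman call:

```go
// Minimal sketch of the "needs transfer" decision logged by cache_images.go:
// an image must be copied from the local cache when it is missing from the
// runtime or its ID differs from the pinned one. inspectImageID wraps the
// same podman call that appears in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspectImageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func needsTransfer(image, wantID string) bool {
	gotID, err := inspectImageID(image)
	if err != nil {
		return true // not present in the container runtime at all
	}
	return gotID != wantID
}

func main() {
	// Expected IDs are elided in this sketch; in the real flow they come from the image cache.
	for _, img := range []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/etcd:3.5.15-0"} {
		fmt.Printf("%s needs transfer: %v\n", img, needsTransfer(img, ""))
	}
}
```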
	I0906 20:05:08.960723   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:08.967887   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:08.980496   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:08.988288   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:08.990844   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.000220   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.031002   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0906 20:05:09.046388   72322 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0906 20:05:09.046430   72322 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.046471   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.079069   72322 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0906 20:05:09.079112   72322 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.079161   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147423   72322 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0906 20:05:09.147470   72322 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.147521   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.147529   72322 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0906 20:05:09.147549   72322 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.147584   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153575   72322 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0906 20:05:09.153612   72322 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.153659   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.153662   72322 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0906 20:05:09.153697   72322 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.153736   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.272296   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.272317   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.272325   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.272368   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.272398   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.272474   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.397590   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.398793   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.398807   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.398899   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.398912   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.398969   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.515664   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0906 20:05:09.529550   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0906 20:05:09.529604   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0906 20:05:09.529762   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0906 20:05:09.532314   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0906 20:05:09.532385   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0906 20:05:09.603138   72322 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.654698   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0906 20:05:09.654823   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:09.671020   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0906 20:05:09.671069   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0906 20:05:09.671123   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0906 20:05:09.671156   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:09.671128   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.671208   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:09.686883   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0906 20:05:09.687013   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:09.709594   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0906 20:05:09.709706   72322 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0906 20:05:09.709758   72322 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:09.709858   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0906 20:05:09.709877   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709868   72322 ssh_runner.go:195] Run: which crictl
	I0906 20:05:09.709940   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0906 20:05:09.709906   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0906 20:05:09.709994   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0906 20:05:09.709771   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0906 20:05:09.709973   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0906 20:05:09.709721   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:09.714755   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0906 20:05:12.389459   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.679458658s)
	I0906 20:05:12.389498   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0906 20:05:12.389522   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389524   72322 ssh_runner.go:235] Completed: which crictl: (2.679596804s)
	I0906 20:05:12.389573   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0906 20:05:12.389582   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:10.706506   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.207411   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:11.707316   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.207239   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.706502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.206560   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:13.706593   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.207192   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:14.706940   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:15.207250   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:12.338753   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.339694   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.839193   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:13.656815   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:16.156988   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:14.349906   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.960304583s)
	I0906 20:05:14.349962   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.960364149s)
	I0906 20:05:14.349988   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:14.350001   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0906 20:05:14.350032   72322 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.350085   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0906 20:05:14.397740   72322 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:05:16.430883   72322 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.03310928s)
	I0906 20:05:16.430943   72322 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0906 20:05:16.430977   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.080869318s)
	I0906 20:05:16.431004   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0906 20:05:16.431042   72322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:16.431042   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:16.431103   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0906 20:05:18.293255   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.862123731s)
	I0906 20:05:18.293274   72322 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.862211647s)
	I0906 20:05:18.293294   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0906 20:05:18.293315   72322 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0906 20:05:18.293324   72322 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:18.293372   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0906 20:05:15.706728   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.207477   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:16.707337   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.206710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:17.707209   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.206544   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.707104   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.206752   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:19.706561   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:20.206507   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:18.840176   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.339033   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:18.657074   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:21.157488   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:19.142756   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0906 20:05:19.142784   72322 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:19.142824   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0906 20:05:20.494611   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.351756729s)
	I0906 20:05:20.494642   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0906 20:05:20.494656   72322 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.494706   72322 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0906 20:05:20.706855   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.206585   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:21.706948   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.207150   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:22.706508   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.207459   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.706894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.206643   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:24.707208   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:25.206797   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:23.838561   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:25.838697   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:23.656303   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:26.156813   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:24.186953   72322 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.692203906s)
	I0906 20:05:24.186987   72322 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19576-6021/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0906 20:05:24.187019   72322 cache_images.go:123] Successfully loaded all cached images
	I0906 20:05:24.187026   72322 cache_images.go:92] duration metric: took 15.388623154s to LoadCachedImages
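	Note: the cached tarballs are loaded one at a time with sudo podman load -i, each load completing before the next starts; the whole LoadCachedImages step above took about 15.4s. A minimal sequential-load sketch, with the tarball paths copied from the log:

```go
// Minimal sketch of the sequential image-load step above: each cached tarball
// under /var/lib/minikube/images is loaded with podman before the next one.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	tarballs := []string{
		"/var/lib/minikube/images/kube-apiserver_v1.31.0",
		"/var/lib/minikube/images/kube-controller-manager_v1.31.0",
		"/var/lib/minikube/images/coredns_v1.11.1",
		"/var/lib/minikube/images/kube-proxy_v1.31.0",
		"/var/lib/minikube/images/storage-provisioner_v5",
		"/var/lib/minikube/images/kube-scheduler_v1.31.0",
		"/var/lib/minikube/images/etcd_3.5.15-0",
	}
	start := time.Now()
	for _, tarball := range tarballs {
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			fmt.Printf("loading %s failed: %v\n", tarball, err)
			return
		}
	}
	fmt.Printf("loaded %d images in %s\n", len(tarballs), time.Since(start))
}
```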
	I0906 20:05:24.187040   72322 kubeadm.go:934] updating node { 192.168.61.184 8443 v1.31.0 crio true true} ...
	I0906 20:05:24.187169   72322 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-504385 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 20:05:24.187251   72322 ssh_runner.go:195] Run: crio config
	I0906 20:05:24.236699   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:24.236722   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:24.236746   72322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 20:05:24.236770   72322 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.184 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-504385 NodeName:no-preload-504385 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 20:05:24.236943   72322 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-504385"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 20:05:24.237005   72322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 20:05:24.247480   72322 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 20:05:24.247554   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 20:05:24.257088   72322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0906 20:05:24.274447   72322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 20:05:24.292414   72322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
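	Note: the kubelet drop-in, the kubelet unit, and the kubeadm config shown above are pushed to the node by the three "scp memory" steps (317, 352 and 2161 bytes). A minimal local sketch of the kubeadm.yaml.new write, with the YAML body abbreviated rather than reproduced:

```go
// Minimal local sketch of the kubeadm.yaml.new write above. The YAML body is
// abbreviated here; in the log, the full 2161-byte document shown earlier is
// what gets written.
package main

import (
	"fmt"
	"os"
)

func main() {
	kubeadmYAML := []byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n# ...rest of the generated config...\n")
	const target = "/var/tmp/minikube/kubeadm.yaml.new"
	if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
		fmt.Println("mkdir failed:", err)
		return
	}
	if err := os.WriteFile(target, kubeadmYAML, 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Printf("wrote %d bytes to %s\n", len(kubeadmYAML), target)
}
```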
	I0906 20:05:24.310990   72322 ssh_runner.go:195] Run: grep 192.168.61.184	control-plane.minikube.internal$ /etc/hosts
	I0906 20:05:24.315481   72322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 20:05:24.327268   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:05:24.465318   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:05:24.482195   72322 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385 for IP: 192.168.61.184
	I0906 20:05:24.482216   72322 certs.go:194] generating shared ca certs ...
	I0906 20:05:24.482230   72322 certs.go:226] acquiring lock for ca certs: {Name:mk6bd4100cdfbb4ea45c551d4af12536314b056b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:05:24.482364   72322 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key
	I0906 20:05:24.482407   72322 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key
	I0906 20:05:24.482420   72322 certs.go:256] generating profile certs ...
	I0906 20:05:24.482522   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/client.key
	I0906 20:05:24.482603   72322 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key.9c78613e
	I0906 20:05:24.482664   72322 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key
	I0906 20:05:24.482828   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem (1338 bytes)
	W0906 20:05:24.482878   72322 certs.go:480] ignoring /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178_empty.pem, impossibly tiny 0 bytes
	I0906 20:05:24.482894   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 20:05:24.482927   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/ca.pem (1078 bytes)
	I0906 20:05:24.482956   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/cert.pem (1123 bytes)
	I0906 20:05:24.482992   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/certs/key.pem (1675 bytes)
	I0906 20:05:24.483043   72322 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem (1708 bytes)
	I0906 20:05:24.483686   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 20:05:24.528742   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0906 20:05:24.561921   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 20:05:24.596162   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 20:05:24.636490   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0906 20:05:24.664450   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 20:05:24.690551   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 20:05:24.717308   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/no-preload-504385/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0906 20:05:24.741498   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 20:05:24.764388   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/certs/13178.pem --> /usr/share/ca-certificates/13178.pem (1338 bytes)
	I0906 20:05:24.789473   72322 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/ssl/certs/131782.pem --> /usr/share/ca-certificates/131782.pem (1708 bytes)
	I0906 20:05:24.814772   72322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 20:05:24.833405   72322 ssh_runner.go:195] Run: openssl version
	I0906 20:05:24.841007   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13178.pem && ln -fs /usr/share/ca-certificates/13178.pem /etc/ssl/certs/13178.pem"
	I0906 20:05:24.852635   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857351   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  6 18:47 /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.857404   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13178.pem
	I0906 20:05:24.863435   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13178.pem /etc/ssl/certs/51391683.0"
	I0906 20:05:24.874059   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/131782.pem && ln -fs /usr/share/ca-certificates/131782.pem /etc/ssl/certs/131782.pem"
	I0906 20:05:24.884939   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889474   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  6 18:47 /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.889567   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/131782.pem
	I0906 20:05:24.895161   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/131782.pem /etc/ssl/certs/3ec20f2e.0"
	I0906 20:05:24.905629   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 20:05:24.916101   72322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920494   72322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.920550   72322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 20:05:24.925973   72322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 20:05:24.937017   72322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 20:05:24.941834   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0906 20:05:24.947779   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0906 20:05:24.954042   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0906 20:05:24.959977   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0906 20:05:24.965500   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0906 20:05:24.970996   72322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
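	Note: the openssl x509 -checkend 86400 calls above verify that each control-plane certificate will still be valid 24 hours from now. The same check expressed with Go's crypto/x509, shown for one certificate path from the log:

```go
// The same 24-hour validity check as "openssl x509 -checkend 86400" above,
// expressed with Go's crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("valid for the next 24h:", ok, err)
}
```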
	I0906 20:05:24.976532   72322 kubeadm.go:392] StartCluster: {Name:no-preload-504385 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-504385 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 20:05:24.976606   72322 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0906 20:05:24.976667   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.015556   72322 cri.go:89] found id: ""
	I0906 20:05:25.015653   72322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 20:05:25.032921   72322 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0906 20:05:25.032954   72322 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0906 20:05:25.033009   72322 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0906 20:05:25.044039   72322 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0906 20:05:25.045560   72322 kubeconfig.go:125] found "no-preload-504385" server: "https://192.168.61.184:8443"
	I0906 20:05:25.049085   72322 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0906 20:05:25.059027   72322 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.184
	I0906 20:05:25.059060   72322 kubeadm.go:1160] stopping kube-system containers ...
	I0906 20:05:25.059073   72322 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0906 20:05:25.059128   72322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 20:05:25.096382   72322 cri.go:89] found id: ""
	I0906 20:05:25.096446   72322 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0906 20:05:25.114296   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:05:25.126150   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:05:25.126168   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:05:25.126207   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:05:25.136896   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:05:25.136964   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:05:25.148074   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:05:25.158968   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:05:25.159027   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:05:25.169642   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.179183   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:05:25.179258   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:05:25.189449   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:05:25.199237   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:05:25.199286   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
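	Note: the loop above treats each kubeconfig under /etc/kubernetes as stale unless it already points at https://control-plane.minikube.internal:8443; since none of the files exist yet, every grep exits with status 2 and the rm -f calls are no-ops before kubeadm regenerates them. A compact sketch of that keep-or-remove decision:

```go
// Compact sketch of the keep-or-remove decision above: a kubeconfig under
// /etc/kubernetes is kept only if it already points at the control-plane
// endpoint; otherwise it is removed so kubeadm can regenerate it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			fmt.Println("keeping", f)
			continue
		}
		// Missing or pointing elsewhere: remove it (a no-op if it never existed).
		if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
			fmt.Println("remove failed:", rmErr)
		}
	}
}
```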
	I0906 20:05:25.209663   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:05:25.220511   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:25.336312   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.475543   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.139195419s)
	I0906 20:05:26.475586   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.700018   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.768678   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:26.901831   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:05:26.901928   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.401987   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.903023   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.957637   72322 api_server.go:72] duration metric: took 1.055807s to wait for apiserver process to appear ...
	I0906 20:05:27.957664   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:05:27.957684   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:27.958196   72322 api_server.go:269] stopped: https://192.168.61.184:8443/healthz: Get "https://192.168.61.184:8443/healthz": dial tcp 192.168.61.184:8443: connect: connection refused
	I0906 20:05:28.458421   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:25.706669   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.206691   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:26.707336   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.206666   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.706715   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.206488   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:28.706489   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.207461   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:29.707293   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:30.206591   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:27.840001   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:29.840101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.768451   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0906 20:05:30.768482   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0906 20:05:30.768505   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.868390   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.868430   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:30.958611   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:30.964946   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:30.964977   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.458125   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.462130   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.462155   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:31.958761   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:31.963320   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0906 20:05:31.963347   72322 api_server.go:103] status: https://192.168.61.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0906 20:05:32.458596   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:05:32.464885   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:05:32.474582   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:05:32.474616   72322 api_server.go:131] duration metric: took 4.51694462s to wait for apiserver health ...
	I0906 20:05:32.474627   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:05:32.474635   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:05:32.476583   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:05:28.157326   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:30.657628   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:32.477797   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:05:32.490715   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:05:32.510816   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:05:32.529192   72322 system_pods.go:59] 8 kube-system pods found
	I0906 20:05:32.529236   72322 system_pods.go:61] "coredns-6f6b679f8f-s7tnx" [ce438653-a3b9-4412-8705-7d2db7df5d01] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0906 20:05:32.529254   72322 system_pods.go:61] "etcd-no-preload-504385" [6ec6b2a1-c22a-44b4-b726-808a56f2be2d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0906 20:05:32.529266   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [5f2baa0b-3cf3-4e0d-984b-80fa19adb3b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0906 20:05:32.529275   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [59ffbd51-6a83-43e6-8ef7-bc1cfd80b4d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0906 20:05:32.529292   72322 system_pods.go:61] "kube-proxy-dg8sg" [2e0393f3-b9bd-4603-b800-e1a2fdbf71d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0906 20:05:32.529300   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [52a74c91-a6ec-4d64-8651-e1f87db21b40] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0906 20:05:32.529306   72322 system_pods.go:61] "metrics-server-6867b74b74-nn295" [9d0f51d1-7abf-4f63-bef7-c02f6cd89c5d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:05:32.529313   72322 system_pods.go:61] "storage-provisioner" [69ed0066-2b84-4a4d-91e5-1e25bb3f31eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0906 20:05:32.529320   72322 system_pods.go:74] duration metric: took 18.48107ms to wait for pod list to return data ...
	I0906 20:05:32.529333   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:05:32.535331   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:05:32.535363   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:05:32.535376   72322 node_conditions.go:105] duration metric: took 6.037772ms to run NodePressure ...
	I0906 20:05:32.535397   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0906 20:05:32.955327   72322 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962739   72322 kubeadm.go:739] kubelet initialised
	I0906 20:05:32.962767   72322 kubeadm.go:740] duration metric: took 7.415054ms waiting for restarted kubelet to initialise ...
	I0906 20:05:32.962776   72322 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:05:32.980280   72322 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:30.707091   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.207070   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:31.707224   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.207295   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.707195   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.207373   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:33.707519   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.207428   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:34.706808   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:35.207396   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:32.340006   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.838636   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:36.838703   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:33.155769   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.156761   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:34.994689   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.487610   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:35.707415   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.206955   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:36.706868   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.206515   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:37.706659   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.206735   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.706915   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.207300   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:39.707211   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:40.207085   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:38.839362   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:41.338875   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:37.657190   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.158940   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:39.986557   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.486518   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:40.706720   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.206896   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:41.707281   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.206751   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:42.706754   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.206987   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.707245   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.207502   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:44.707112   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:45.206569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:43.339353   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.838975   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:42.657187   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.156196   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:47.157014   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:43.986675   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.986701   72322 pod_ready.go:82] duration metric: took 11.006397745s for pod "coredns-6f6b679f8f-s7tnx" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.986710   72322 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991650   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:43.991671   72322 pod_ready.go:82] duration metric: took 4.955425ms for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:43.991680   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997218   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:44.997242   72322 pod_ready.go:82] duration metric: took 1.005553613s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:44.997253   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002155   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.002177   72322 pod_ready.go:82] duration metric: took 4.916677ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.002186   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006610   72322 pod_ready.go:93] pod "kube-proxy-dg8sg" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.006631   72322 pod_ready.go:82] duration metric: took 4.439092ms for pod "kube-proxy-dg8sg" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.006639   72322 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185114   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:05:45.185139   72322 pod_ready.go:82] duration metric: took 178.494249ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:45.185149   72322 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	I0906 20:05:47.191676   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:45.707450   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.207446   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:46.707006   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.206484   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:47.707168   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.207536   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.707554   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.206894   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:49.706709   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:50.206799   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:48.338355   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.839372   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.157301   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.157426   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:49.193619   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:51.692286   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:50.707012   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.206914   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:51.706917   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.207465   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:52.706682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.206565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.706757   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.206600   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:54.706926   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:55.207382   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:53.338845   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.339570   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:53.656904   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.158806   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:54.191331   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:56.192498   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:55.707103   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.206621   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:56.707156   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.207277   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:57.706568   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:05:58.206599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:05:58.206698   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:05:58.245828   73230 cri.go:89] found id: ""
	I0906 20:05:58.245857   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.245868   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:05:58.245875   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:05:58.245938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:05:58.283189   73230 cri.go:89] found id: ""
	I0906 20:05:58.283217   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.283228   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:05:58.283235   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:05:58.283303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:05:58.320834   73230 cri.go:89] found id: ""
	I0906 20:05:58.320868   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.320880   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:05:58.320889   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:05:58.320944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:05:58.356126   73230 cri.go:89] found id: ""
	I0906 20:05:58.356152   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.356162   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:05:58.356169   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:05:58.356227   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:05:58.395951   73230 cri.go:89] found id: ""
	I0906 20:05:58.395977   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.395987   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:05:58.395994   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:05:58.396061   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:05:58.431389   73230 cri.go:89] found id: ""
	I0906 20:05:58.431415   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.431426   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:05:58.431433   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:05:58.431511   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:05:58.466255   73230 cri.go:89] found id: ""
	I0906 20:05:58.466285   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.466294   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:05:58.466300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:05:58.466356   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:05:58.505963   73230 cri.go:89] found id: ""
	I0906 20:05:58.505989   73230 logs.go:276] 0 containers: []
	W0906 20:05:58.505997   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:05:58.506006   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:05:58.506018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:05:58.579027   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:05:58.579061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:05:58.620332   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:05:58.620365   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:05:58.675017   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:05:58.675052   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:05:58.689944   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:05:58.689970   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:05:58.825396   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:05:57.838610   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.339329   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.656312   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.656996   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:05:58.691099   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:00.692040   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.192516   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:01.326375   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:01.340508   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:01.340570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:01.375429   73230 cri.go:89] found id: ""
	I0906 20:06:01.375460   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.375470   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:01.375478   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:01.375539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:01.410981   73230 cri.go:89] found id: ""
	I0906 20:06:01.411008   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.411019   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:01.411026   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:01.411083   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:01.448925   73230 cri.go:89] found id: ""
	I0906 20:06:01.448957   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.448968   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:01.448975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:01.449040   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:01.492063   73230 cri.go:89] found id: ""
	I0906 20:06:01.492094   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.492104   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:01.492112   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:01.492181   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:01.557779   73230 cri.go:89] found id: ""
	I0906 20:06:01.557812   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.557823   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:01.557830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:01.557892   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:01.604397   73230 cri.go:89] found id: ""
	I0906 20:06:01.604424   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.604432   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:01.604437   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:01.604482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:01.642249   73230 cri.go:89] found id: ""
	I0906 20:06:01.642280   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.642292   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:01.642300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:01.642364   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:01.692434   73230 cri.go:89] found id: ""
	I0906 20:06:01.692462   73230 logs.go:276] 0 containers: []
	W0906 20:06:01.692474   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:01.692483   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:01.692498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:01.705860   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:01.705884   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:01.783929   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:01.783954   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:01.783965   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:01.864347   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:01.864385   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:01.902284   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:01.902311   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:04.456090   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:04.469775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:04.469840   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:04.505742   73230 cri.go:89] found id: ""
	I0906 20:06:04.505769   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.505778   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:04.505783   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:04.505835   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:04.541787   73230 cri.go:89] found id: ""
	I0906 20:06:04.541811   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.541819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:04.541824   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:04.541874   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:04.578775   73230 cri.go:89] found id: ""
	I0906 20:06:04.578806   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.578817   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:04.578825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:04.578885   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:04.614505   73230 cri.go:89] found id: ""
	I0906 20:06:04.614533   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.614542   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:04.614548   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:04.614594   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:04.652988   73230 cri.go:89] found id: ""
	I0906 20:06:04.653016   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.653027   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:04.653035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:04.653104   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:04.692380   73230 cri.go:89] found id: ""
	I0906 20:06:04.692408   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.692416   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:04.692423   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:04.692478   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:04.729846   73230 cri.go:89] found id: ""
	I0906 20:06:04.729869   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.729880   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:04.729887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:04.729953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:04.766341   73230 cri.go:89] found id: ""
	I0906 20:06:04.766370   73230 logs.go:276] 0 containers: []
	W0906 20:06:04.766379   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:04.766390   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:04.766405   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:04.779801   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:04.779828   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:04.855313   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:04.855334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:04.855346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:04.934210   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:04.934246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:04.975589   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:04.975621   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:02.839427   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:04.840404   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:03.158048   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.655510   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:05.192558   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.692755   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.528622   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:07.544085   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:07.544156   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:07.588106   73230 cri.go:89] found id: ""
	I0906 20:06:07.588139   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.588149   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:07.588157   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:07.588210   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:07.630440   73230 cri.go:89] found id: ""
	I0906 20:06:07.630476   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.630494   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:07.630500   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:07.630551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:07.668826   73230 cri.go:89] found id: ""
	I0906 20:06:07.668870   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.668889   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:07.668898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:07.668962   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:07.706091   73230 cri.go:89] found id: ""
	I0906 20:06:07.706118   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.706130   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:07.706138   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:07.706196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:07.741679   73230 cri.go:89] found id: ""
	I0906 20:06:07.741708   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.741719   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:07.741726   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:07.741792   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:07.778240   73230 cri.go:89] found id: ""
	I0906 20:06:07.778277   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.778288   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:07.778296   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:07.778352   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:07.813183   73230 cri.go:89] found id: ""
	I0906 20:06:07.813212   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.813224   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:07.813232   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:07.813294   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:07.853938   73230 cri.go:89] found id: ""
	I0906 20:06:07.853970   73230 logs.go:276] 0 containers: []
	W0906 20:06:07.853980   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:07.853988   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:07.854001   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:07.893540   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:07.893567   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:07.944219   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:07.944262   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:07.959601   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:07.959635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:08.034487   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:08.034513   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:08.034529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:07.339634   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:09.838953   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:07.658315   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.157980   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.192738   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.691823   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:10.611413   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:10.625273   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:10.625353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:10.664568   73230 cri.go:89] found id: ""
	I0906 20:06:10.664597   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.664609   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:10.664617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:10.664680   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:10.702743   73230 cri.go:89] found id: ""
	I0906 20:06:10.702772   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.702783   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:10.702790   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:10.702850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:10.739462   73230 cri.go:89] found id: ""
	I0906 20:06:10.739487   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.739504   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:10.739511   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:10.739572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:10.776316   73230 cri.go:89] found id: ""
	I0906 20:06:10.776344   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.776355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:10.776362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:10.776420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:10.809407   73230 cri.go:89] found id: ""
	I0906 20:06:10.809440   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.809451   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:10.809459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:10.809519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:10.844736   73230 cri.go:89] found id: ""
	I0906 20:06:10.844765   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.844777   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:10.844784   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:10.844851   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:10.880658   73230 cri.go:89] found id: ""
	I0906 20:06:10.880685   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.880693   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:10.880698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:10.880753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:10.917032   73230 cri.go:89] found id: ""
	I0906 20:06:10.917063   73230 logs.go:276] 0 containers: []
	W0906 20:06:10.917074   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:10.917085   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:10.917100   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:10.980241   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:10.980272   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:10.995389   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:10.995435   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:11.070285   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:11.070313   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:11.070328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:11.155574   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:11.155607   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:13.703712   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:13.718035   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:13.718093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:13.753578   73230 cri.go:89] found id: ""
	I0906 20:06:13.753603   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.753611   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:13.753617   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:13.753659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:13.790652   73230 cri.go:89] found id: ""
	I0906 20:06:13.790681   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.790691   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:13.790697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:13.790749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:13.824243   73230 cri.go:89] found id: ""
	I0906 20:06:13.824278   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.824288   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:13.824293   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:13.824342   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:13.859647   73230 cri.go:89] found id: ""
	I0906 20:06:13.859691   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.859702   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:13.859721   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:13.859781   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:13.897026   73230 cri.go:89] found id: ""
	I0906 20:06:13.897061   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.897068   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:13.897075   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:13.897131   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:13.933904   73230 cri.go:89] found id: ""
	I0906 20:06:13.933927   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.933935   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:13.933941   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:13.933986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:13.969168   73230 cri.go:89] found id: ""
	I0906 20:06:13.969198   73230 logs.go:276] 0 containers: []
	W0906 20:06:13.969210   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:13.969218   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:13.969295   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:14.005808   73230 cri.go:89] found id: ""
	I0906 20:06:14.005838   73230 logs.go:276] 0 containers: []
	W0906 20:06:14.005849   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:14.005862   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:14.005878   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:14.060878   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:14.060915   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:14.075388   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:14.075414   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:14.144942   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:14.144966   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:14.144981   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:14.233088   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:14.233139   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:12.338579   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.839062   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:12.655992   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.657020   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.157119   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:14.692103   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:17.193196   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:16.776744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:16.790292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:16.790384   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:16.828877   73230 cri.go:89] found id: ""
	I0906 20:06:16.828910   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.828921   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:16.828929   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:16.829016   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:16.864413   73230 cri.go:89] found id: ""
	I0906 20:06:16.864440   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.864449   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:16.864455   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:16.864525   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:16.908642   73230 cri.go:89] found id: ""
	I0906 20:06:16.908676   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.908687   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:16.908694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:16.908748   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:16.952247   73230 cri.go:89] found id: ""
	I0906 20:06:16.952278   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.952286   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:16.952292   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:16.952343   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:16.990986   73230 cri.go:89] found id: ""
	I0906 20:06:16.991013   73230 logs.go:276] 0 containers: []
	W0906 20:06:16.991022   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:16.991028   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:16.991077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:17.031002   73230 cri.go:89] found id: ""
	I0906 20:06:17.031034   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.031045   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:17.031052   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:17.031114   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:17.077533   73230 cri.go:89] found id: ""
	I0906 20:06:17.077560   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.077572   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:17.077579   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:17.077646   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:17.116770   73230 cri.go:89] found id: ""
	I0906 20:06:17.116798   73230 logs.go:276] 0 containers: []
	W0906 20:06:17.116806   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:17.116817   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:17.116834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.169300   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:17.169337   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:17.184266   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:17.184299   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:17.266371   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:17.266400   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:17.266419   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:17.343669   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:17.343698   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:19.886541   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:19.899891   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:19.899951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:19.946592   73230 cri.go:89] found id: ""
	I0906 20:06:19.946621   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.946630   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:19.946636   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:19.946686   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:19.981758   73230 cri.go:89] found id: ""
	I0906 20:06:19.981788   73230 logs.go:276] 0 containers: []
	W0906 20:06:19.981797   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:19.981802   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:19.981854   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:20.018372   73230 cri.go:89] found id: ""
	I0906 20:06:20.018397   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.018405   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:20.018411   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:20.018460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:20.054380   73230 cri.go:89] found id: ""
	I0906 20:06:20.054428   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.054440   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:20.054449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:20.054521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:20.092343   73230 cri.go:89] found id: ""
	I0906 20:06:20.092376   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.092387   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:20.092395   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:20.092463   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:20.128568   73230 cri.go:89] found id: ""
	I0906 20:06:20.128594   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.128604   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:20.128610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:20.128657   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:20.166018   73230 cri.go:89] found id: ""
	I0906 20:06:20.166046   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.166057   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:20.166072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:20.166125   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:20.203319   73230 cri.go:89] found id: ""
	I0906 20:06:20.203347   73230 logs.go:276] 0 containers: []
	W0906 20:06:20.203355   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:20.203365   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:20.203381   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:20.287217   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:20.287243   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:20.287259   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:20.372799   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:20.372834   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:20.416595   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:20.416620   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:17.338546   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.342409   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.838689   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.657411   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:22.157972   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:19.691327   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:21.692066   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:20.468340   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:20.468378   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:22.983259   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:22.997014   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:22.997098   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:23.034483   73230 cri.go:89] found id: ""
	I0906 20:06:23.034513   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.034524   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:23.034531   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:23.034597   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:23.072829   73230 cri.go:89] found id: ""
	I0906 20:06:23.072867   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.072878   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:23.072885   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:23.072949   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:23.110574   73230 cri.go:89] found id: ""
	I0906 20:06:23.110602   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.110613   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:23.110620   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:23.110684   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:23.149506   73230 cri.go:89] found id: ""
	I0906 20:06:23.149538   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.149550   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:23.149557   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:23.149619   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:23.191321   73230 cri.go:89] found id: ""
	I0906 20:06:23.191355   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.191367   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:23.191374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:23.191441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:23.233737   73230 cri.go:89] found id: ""
	I0906 20:06:23.233770   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.233791   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:23.233800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:23.233873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:23.270013   73230 cri.go:89] found id: ""
	I0906 20:06:23.270048   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.270060   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:23.270068   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:23.270127   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:23.309517   73230 cri.go:89] found id: ""
	I0906 20:06:23.309541   73230 logs.go:276] 0 containers: []
	W0906 20:06:23.309549   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:23.309566   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:23.309578   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:23.380645   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:23.380675   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:23.380690   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:23.463656   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:23.463696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:23.504100   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:23.504134   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:23.557438   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:23.557483   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:23.841101   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.340722   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.658261   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:27.155171   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:24.193829   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.690602   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:26.074045   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:26.088006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:26.088072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:26.124445   73230 cri.go:89] found id: ""
	I0906 20:06:26.124469   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.124476   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:26.124482   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:26.124537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:26.158931   73230 cri.go:89] found id: ""
	I0906 20:06:26.158957   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.158968   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:26.158975   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:26.159035   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:26.197125   73230 cri.go:89] found id: ""
	I0906 20:06:26.197154   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.197164   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:26.197171   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:26.197234   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:26.233241   73230 cri.go:89] found id: ""
	I0906 20:06:26.233278   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.233291   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:26.233300   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:26.233366   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:26.269910   73230 cri.go:89] found id: ""
	I0906 20:06:26.269943   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.269955   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:26.269962   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:26.270026   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:26.308406   73230 cri.go:89] found id: ""
	I0906 20:06:26.308439   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.308450   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:26.308459   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:26.308521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:26.344248   73230 cri.go:89] found id: ""
	I0906 20:06:26.344276   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.344288   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:26.344295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:26.344353   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:26.391794   73230 cri.go:89] found id: ""
	I0906 20:06:26.391827   73230 logs.go:276] 0 containers: []
	W0906 20:06:26.391840   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:26.391851   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:26.391866   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:26.444192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:26.444231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:26.459113   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:26.459144   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:26.533920   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:26.533945   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:26.533960   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:26.616382   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:26.616416   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:29.160429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:29.175007   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:29.175063   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:29.212929   73230 cri.go:89] found id: ""
	I0906 20:06:29.212961   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.212972   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:29.212980   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:29.213042   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:29.250777   73230 cri.go:89] found id: ""
	I0906 20:06:29.250806   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.250815   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:29.250821   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:29.250870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:29.292222   73230 cri.go:89] found id: ""
	I0906 20:06:29.292253   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.292262   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:29.292268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:29.292331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:29.328379   73230 cri.go:89] found id: ""
	I0906 20:06:29.328413   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.328431   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:29.328436   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:29.328482   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:29.366792   73230 cri.go:89] found id: ""
	I0906 20:06:29.366822   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.366834   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:29.366841   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:29.366903   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:29.402233   73230 cri.go:89] found id: ""
	I0906 20:06:29.402261   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.402270   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:29.402276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:29.402331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:29.436695   73230 cri.go:89] found id: ""
	I0906 20:06:29.436724   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.436731   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:29.436736   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:29.436787   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:29.473050   73230 cri.go:89] found id: ""
	I0906 20:06:29.473074   73230 logs.go:276] 0 containers: []
	W0906 20:06:29.473082   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:29.473091   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:29.473101   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:29.524981   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:29.525018   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:29.538698   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:29.538722   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:29.611026   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:29.611049   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:29.611064   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:29.686898   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:29.686931   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:28.839118   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:30.839532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:29.156985   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.656552   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:28.694188   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:31.191032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.192623   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:32.228399   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:32.244709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:32.244775   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:32.285681   73230 cri.go:89] found id: ""
	I0906 20:06:32.285713   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.285724   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:32.285732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:32.285794   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:32.325312   73230 cri.go:89] found id: ""
	I0906 20:06:32.325340   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.325349   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:32.325355   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:32.325400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:32.361420   73230 cri.go:89] found id: ""
	I0906 20:06:32.361455   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.361468   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:32.361477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:32.361543   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:32.398881   73230 cri.go:89] found id: ""
	I0906 20:06:32.398956   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.398971   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:32.398984   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:32.399041   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:32.435336   73230 cri.go:89] found id: ""
	I0906 20:06:32.435362   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.435370   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:32.435375   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:32.435427   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:32.472849   73230 cri.go:89] found id: ""
	I0906 20:06:32.472900   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.472909   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:32.472914   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:32.472964   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:32.508176   73230 cri.go:89] found id: ""
	I0906 20:06:32.508199   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.508208   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:32.508213   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:32.508271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:32.550519   73230 cri.go:89] found id: ""
	I0906 20:06:32.550550   73230 logs.go:276] 0 containers: []
	W0906 20:06:32.550561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:32.550576   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:32.550593   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:32.601362   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:32.601394   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:32.614821   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:32.614849   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:32.686044   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:32.686061   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:32.686074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:32.767706   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:32.767744   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:35.309159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:35.322386   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:35.322462   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:35.362909   73230 cri.go:89] found id: ""
	I0906 20:06:35.362937   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.362948   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:35.362955   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:35.363017   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:35.400591   73230 cri.go:89] found id: ""
	I0906 20:06:35.400621   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.400629   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:35.400635   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:35.400682   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:35.436547   73230 cri.go:89] found id: ""
	I0906 20:06:35.436578   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.436589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:35.436596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:35.436666   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:33.338812   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.340154   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:33.656782   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.657043   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.691312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:37.691358   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:35.473130   73230 cri.go:89] found id: ""
	I0906 20:06:35.473155   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.473163   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:35.473168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:35.473244   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:35.509646   73230 cri.go:89] found id: ""
	I0906 20:06:35.509677   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.509687   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:35.509695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:35.509754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:35.547651   73230 cri.go:89] found id: ""
	I0906 20:06:35.547684   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.547696   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:35.547703   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:35.547761   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:35.608590   73230 cri.go:89] found id: ""
	I0906 20:06:35.608614   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.608624   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:35.608631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:35.608691   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:35.651508   73230 cri.go:89] found id: ""
	I0906 20:06:35.651550   73230 logs.go:276] 0 containers: []
	W0906 20:06:35.651561   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:35.651572   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:35.651585   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:35.705502   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:35.705542   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:35.719550   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:35.719577   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:35.791435   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:35.791461   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:35.791476   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:35.869018   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:35.869070   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:38.411587   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:38.425739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:38.425800   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:38.463534   73230 cri.go:89] found id: ""
	I0906 20:06:38.463560   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.463571   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:38.463578   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:38.463628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:38.499238   73230 cri.go:89] found id: ""
	I0906 20:06:38.499269   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.499280   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:38.499287   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:38.499340   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:38.536297   73230 cri.go:89] found id: ""
	I0906 20:06:38.536334   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.536345   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:38.536352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:38.536417   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:38.573672   73230 cri.go:89] found id: ""
	I0906 20:06:38.573701   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.573712   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:38.573720   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:38.573779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:38.610913   73230 cri.go:89] found id: ""
	I0906 20:06:38.610937   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.610945   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:38.610950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:38.610996   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:38.647335   73230 cri.go:89] found id: ""
	I0906 20:06:38.647359   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.647368   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:38.647374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:38.647418   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:38.684054   73230 cri.go:89] found id: ""
	I0906 20:06:38.684084   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.684097   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:38.684106   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:38.684174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:38.731134   73230 cri.go:89] found id: ""
	I0906 20:06:38.731161   73230 logs.go:276] 0 containers: []
	W0906 20:06:38.731173   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:38.731183   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:38.731199   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:38.787757   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:38.787798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:38.802920   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:38.802955   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:38.889219   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:38.889246   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:38.889261   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:38.964999   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:38.965042   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:37.838886   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.338914   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:38.156615   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:40.656577   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:39.691609   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.692330   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:41.504406   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:41.518111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:41.518169   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:41.558701   73230 cri.go:89] found id: ""
	I0906 20:06:41.558727   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.558738   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:41.558746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:41.558807   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:41.595986   73230 cri.go:89] found id: ""
	I0906 20:06:41.596009   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.596017   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:41.596023   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:41.596070   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:41.631462   73230 cri.go:89] found id: ""
	I0906 20:06:41.631486   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.631494   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:41.631504   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:41.631559   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:41.669646   73230 cri.go:89] found id: ""
	I0906 20:06:41.669674   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.669686   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:41.669693   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:41.669754   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:41.708359   73230 cri.go:89] found id: ""
	I0906 20:06:41.708383   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.708391   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:41.708398   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:41.708446   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:41.745712   73230 cri.go:89] found id: ""
	I0906 20:06:41.745737   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.745750   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:41.745756   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:41.745804   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:41.781862   73230 cri.go:89] found id: ""
	I0906 20:06:41.781883   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.781892   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:41.781898   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:41.781946   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:41.816687   73230 cri.go:89] found id: ""
	I0906 20:06:41.816714   73230 logs.go:276] 0 containers: []
	W0906 20:06:41.816722   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:41.816730   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:41.816742   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:41.830115   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:41.830145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:41.908303   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:41.908334   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:41.908348   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:42.001459   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:42.001501   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:42.061341   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:42.061368   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:44.619574   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:44.633355   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:44.633423   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:44.668802   73230 cri.go:89] found id: ""
	I0906 20:06:44.668834   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.668845   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:44.668852   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:44.668924   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:44.707613   73230 cri.go:89] found id: ""
	I0906 20:06:44.707639   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.707650   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:44.707657   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:44.707727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:44.744202   73230 cri.go:89] found id: ""
	I0906 20:06:44.744231   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.744243   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:44.744250   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:44.744311   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:44.783850   73230 cri.go:89] found id: ""
	I0906 20:06:44.783873   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.783881   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:44.783886   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:44.783938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:44.824986   73230 cri.go:89] found id: ""
	I0906 20:06:44.825011   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.825019   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:44.825025   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:44.825073   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:44.865157   73230 cri.go:89] found id: ""
	I0906 20:06:44.865182   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.865190   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:44.865196   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:44.865258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:44.908268   73230 cri.go:89] found id: ""
	I0906 20:06:44.908295   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.908305   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:44.908312   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:44.908359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:44.948669   73230 cri.go:89] found id: ""
	I0906 20:06:44.948697   73230 logs.go:276] 0 containers: []
	W0906 20:06:44.948706   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:44.948716   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:44.948731   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:44.961862   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:44.961887   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:45.036756   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:45.036783   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:45.036801   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:45.116679   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:45.116717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:45.159756   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:45.159784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:42.339271   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.839443   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:43.155878   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:45.158884   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:44.192211   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:46.692140   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.714682   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:47.730754   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:47.730820   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:47.783208   73230 cri.go:89] found id: ""
	I0906 20:06:47.783239   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.783249   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:47.783255   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:47.783312   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:47.844291   73230 cri.go:89] found id: ""
	I0906 20:06:47.844324   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.844336   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:47.844344   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:47.844407   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:47.881877   73230 cri.go:89] found id: ""
	I0906 20:06:47.881905   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.881913   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:47.881919   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:47.881986   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:47.918034   73230 cri.go:89] found id: ""
	I0906 20:06:47.918058   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.918066   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:47.918072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:47.918126   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:47.957045   73230 cri.go:89] found id: ""
	I0906 20:06:47.957068   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.957077   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:47.957083   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:47.957134   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:47.993849   73230 cri.go:89] found id: ""
	I0906 20:06:47.993872   73230 logs.go:276] 0 containers: []
	W0906 20:06:47.993883   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:47.993890   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:47.993951   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:48.031214   73230 cri.go:89] found id: ""
	I0906 20:06:48.031239   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.031249   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:48.031257   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:48.031314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:48.064634   73230 cri.go:89] found id: ""
	I0906 20:06:48.064673   73230 logs.go:276] 0 containers: []
	W0906 20:06:48.064690   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:48.064698   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:48.064710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:48.104307   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:48.104343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:48.158869   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:48.158900   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:48.173000   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:48.173026   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:48.248751   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:48.248774   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:48.248792   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:47.339014   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.339656   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.838817   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:47.656402   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.156349   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:52.156651   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:49.192411   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:51.691635   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:50.833490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:50.847618   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:50.847702   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:50.887141   73230 cri.go:89] found id: ""
	I0906 20:06:50.887167   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.887176   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:50.887181   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:50.887228   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:50.923435   73230 cri.go:89] found id: ""
	I0906 20:06:50.923480   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.923491   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:50.923499   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:50.923567   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:50.959704   73230 cri.go:89] found id: ""
	I0906 20:06:50.959730   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.959742   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:50.959748   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:50.959810   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:50.992994   73230 cri.go:89] found id: ""
	I0906 20:06:50.993023   73230 logs.go:276] 0 containers: []
	W0906 20:06:50.993032   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:50.993037   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:50.993091   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:51.031297   73230 cri.go:89] found id: ""
	I0906 20:06:51.031321   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.031329   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:51.031335   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:51.031390   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:51.067698   73230 cri.go:89] found id: ""
	I0906 20:06:51.067721   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.067732   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:51.067739   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:51.067799   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:51.102240   73230 cri.go:89] found id: ""
	I0906 20:06:51.102268   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.102278   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:51.102285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:51.102346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:51.137146   73230 cri.go:89] found id: ""
	I0906 20:06:51.137172   73230 logs.go:276] 0 containers: []
	W0906 20:06:51.137183   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:51.137194   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:51.137209   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:51.216158   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:51.216194   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:51.256063   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:51.256088   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:51.309176   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:51.309210   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:51.323515   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:51.323544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:51.393281   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:53.893714   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:53.907807   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:53.907863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:53.947929   73230 cri.go:89] found id: ""
	I0906 20:06:53.947954   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.947962   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:53.947968   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:53.948014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:53.983005   73230 cri.go:89] found id: ""
	I0906 20:06:53.983028   73230 logs.go:276] 0 containers: []
	W0906 20:06:53.983041   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:53.983046   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:53.983094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:54.019004   73230 cri.go:89] found id: ""
	I0906 20:06:54.019027   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.019035   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:54.019041   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:54.019094   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:54.060240   73230 cri.go:89] found id: ""
	I0906 20:06:54.060266   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.060279   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:54.060285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:54.060336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:54.096432   73230 cri.go:89] found id: ""
	I0906 20:06:54.096461   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.096469   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:54.096475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:54.096537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:54.132992   73230 cri.go:89] found id: ""
	I0906 20:06:54.133021   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.133033   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:54.133040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:54.133103   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:54.172730   73230 cri.go:89] found id: ""
	I0906 20:06:54.172754   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.172766   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:54.172778   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:54.172839   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:54.212050   73230 cri.go:89] found id: ""
	I0906 20:06:54.212191   73230 logs.go:276] 0 containers: []
	W0906 20:06:54.212202   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:54.212212   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:54.212234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:54.263603   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:54.263647   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:54.281291   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:54.281324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:54.359523   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:54.359545   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:54.359568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:54.442230   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:54.442265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:06:54.339159   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.841459   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.157379   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.656134   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:54.191878   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.691766   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:56.983744   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:06:56.997451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:06:56.997527   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:06:57.034792   73230 cri.go:89] found id: ""
	I0906 20:06:57.034817   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.034825   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:06:57.034831   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:06:57.034883   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:06:57.073709   73230 cri.go:89] found id: ""
	I0906 20:06:57.073735   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.073745   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:06:57.073751   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:06:57.073803   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:06:57.122758   73230 cri.go:89] found id: ""
	I0906 20:06:57.122787   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.122798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:06:57.122808   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:06:57.122865   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:06:57.158208   73230 cri.go:89] found id: ""
	I0906 20:06:57.158242   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.158252   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:06:57.158262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:06:57.158323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:06:57.194004   73230 cri.go:89] found id: ""
	I0906 20:06:57.194029   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.194037   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:06:57.194044   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:06:57.194099   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:06:57.230068   73230 cri.go:89] found id: ""
	I0906 20:06:57.230099   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.230111   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:06:57.230119   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:06:57.230186   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:06:57.265679   73230 cri.go:89] found id: ""
	I0906 20:06:57.265707   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.265718   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:06:57.265735   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:06:57.265801   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:06:57.304917   73230 cri.go:89] found id: ""
	I0906 20:06:57.304946   73230 logs.go:276] 0 containers: []
	W0906 20:06:57.304956   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:06:57.304967   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:06:57.304980   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:06:57.357238   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:06:57.357276   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:06:57.371648   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:06:57.371674   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:06:57.438572   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:06:57.438590   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:06:57.438602   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:06:57.528212   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:06:57.528256   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:00.071140   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:00.084975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:00.085055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:00.119680   73230 cri.go:89] found id: ""
	I0906 20:07:00.119713   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.119725   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:00.119732   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:00.119786   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:00.155678   73230 cri.go:89] found id: ""
	I0906 20:07:00.155704   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.155716   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:00.155723   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:00.155769   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:00.190758   73230 cri.go:89] found id: ""
	I0906 20:07:00.190783   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.190793   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:00.190799   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:00.190863   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:00.228968   73230 cri.go:89] found id: ""
	I0906 20:07:00.228999   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.229010   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:00.229018   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:00.229079   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:00.265691   73230 cri.go:89] found id: ""
	I0906 20:07:00.265722   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.265733   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:00.265741   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:00.265806   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:00.305785   73230 cri.go:89] found id: ""
	I0906 20:07:00.305812   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.305820   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:00.305825   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:00.305872   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:00.341872   73230 cri.go:89] found id: ""
	I0906 20:07:00.341895   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.341902   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:00.341907   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:00.341955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:00.377661   73230 cri.go:89] found id: ""
	I0906 20:07:00.377690   73230 logs.go:276] 0 containers: []
	W0906 20:07:00.377702   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:00.377712   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:00.377725   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:00.428215   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:00.428254   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:00.443135   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:00.443165   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0906 20:06:59.337996   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.338924   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:58.657236   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.156973   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:06:59.191556   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:01.192082   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.193511   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	W0906 20:07:00.518745   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:00.518768   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:00.518781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:00.604413   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:00.604448   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.146657   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:03.160610   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:03.160665   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:03.200916   73230 cri.go:89] found id: ""
	I0906 20:07:03.200950   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.200960   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:03.200967   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:03.201029   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:03.239550   73230 cri.go:89] found id: ""
	I0906 20:07:03.239579   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.239592   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:03.239600   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:03.239660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:03.278216   73230 cri.go:89] found id: ""
	I0906 20:07:03.278244   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.278255   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:03.278263   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:03.278325   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:03.315028   73230 cri.go:89] found id: ""
	I0906 20:07:03.315059   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.315073   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:03.315080   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:03.315146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:03.354614   73230 cri.go:89] found id: ""
	I0906 20:07:03.354638   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.354647   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:03.354652   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:03.354710   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:03.390105   73230 cri.go:89] found id: ""
	I0906 20:07:03.390129   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.390138   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:03.390144   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:03.390190   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:03.427651   73230 cri.go:89] found id: ""
	I0906 20:07:03.427679   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.427687   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:03.427695   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:03.427763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:03.463191   73230 cri.go:89] found id: ""
	I0906 20:07:03.463220   73230 logs.go:276] 0 containers: []
	W0906 20:07:03.463230   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:03.463242   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:03.463288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:03.476966   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:03.476995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:03.558415   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:03.558441   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:03.558457   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:03.641528   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:03.641564   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:03.680916   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:03.680943   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:03.339511   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.340113   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:03.157907   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.160507   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:05.692151   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:08.191782   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:06.235947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:06.249589   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:06.249667   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:06.289193   73230 cri.go:89] found id: ""
	I0906 20:07:06.289223   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.289235   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:06.289242   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:06.289305   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:06.324847   73230 cri.go:89] found id: ""
	I0906 20:07:06.324887   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.324898   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:06.324904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:06.324966   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:06.361755   73230 cri.go:89] found id: ""
	I0906 20:07:06.361786   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.361798   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:06.361806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:06.361873   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:06.397739   73230 cri.go:89] found id: ""
	I0906 20:07:06.397766   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.397775   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:06.397780   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:06.397833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:06.432614   73230 cri.go:89] found id: ""
	I0906 20:07:06.432641   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.432649   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:06.432655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:06.432703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:06.467784   73230 cri.go:89] found id: ""
	I0906 20:07:06.467812   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.467823   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:06.467830   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:06.467890   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:06.507055   73230 cri.go:89] found id: ""
	I0906 20:07:06.507085   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.507096   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:06.507104   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:06.507165   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:06.544688   73230 cri.go:89] found id: ""
	I0906 20:07:06.544720   73230 logs.go:276] 0 containers: []
	W0906 20:07:06.544730   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:06.544740   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:06.544751   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:06.597281   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:06.597314   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:06.612749   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:06.612774   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:06.684973   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:06.684993   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:06.685006   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:06.764306   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:06.764345   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.304340   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:09.317460   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:09.317536   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:09.354289   73230 cri.go:89] found id: ""
	I0906 20:07:09.354312   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.354322   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:09.354327   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:09.354373   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:09.390962   73230 cri.go:89] found id: ""
	I0906 20:07:09.390997   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.391008   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:09.391015   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:09.391076   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:09.427456   73230 cri.go:89] found id: ""
	I0906 20:07:09.427491   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.427502   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:09.427510   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:09.427572   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:09.462635   73230 cri.go:89] found id: ""
	I0906 20:07:09.462667   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.462680   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:09.462687   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:09.462749   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:09.506726   73230 cri.go:89] found id: ""
	I0906 20:07:09.506751   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.506767   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:09.506775   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:09.506836   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:09.541974   73230 cri.go:89] found id: ""
	I0906 20:07:09.541999   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.542009   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:09.542017   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:09.542077   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:09.580069   73230 cri.go:89] found id: ""
	I0906 20:07:09.580104   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.580115   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:09.580123   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:09.580182   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:09.616025   73230 cri.go:89] found id: ""
	I0906 20:07:09.616054   73230 logs.go:276] 0 containers: []
	W0906 20:07:09.616065   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:09.616075   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:09.616090   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:09.630967   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:09.630993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:09.716733   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:09.716766   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:09.716782   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:09.792471   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:09.792503   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:09.832326   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:09.832357   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:07.840909   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.339239   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:07.655710   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:09.656069   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:11.656458   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:10.192155   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.192716   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:12.385565   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:12.398694   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:12.398768   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:12.437446   73230 cri.go:89] found id: ""
	I0906 20:07:12.437473   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.437482   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:12.437487   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:12.437555   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:12.473328   73230 cri.go:89] found id: ""
	I0906 20:07:12.473355   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.473362   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:12.473372   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:12.473429   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:12.510935   73230 cri.go:89] found id: ""
	I0906 20:07:12.510962   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.510972   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:12.510979   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:12.511044   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:12.547961   73230 cri.go:89] found id: ""
	I0906 20:07:12.547991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.547999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:12.548005   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:12.548062   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:12.585257   73230 cri.go:89] found id: ""
	I0906 20:07:12.585291   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.585302   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:12.585309   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:12.585369   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:12.623959   73230 cri.go:89] found id: ""
	I0906 20:07:12.623991   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.624003   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:12.624010   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:12.624066   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:12.662795   73230 cri.go:89] found id: ""
	I0906 20:07:12.662822   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.662832   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:12.662840   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:12.662896   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:12.700941   73230 cri.go:89] found id: ""
	I0906 20:07:12.700967   73230 logs.go:276] 0 containers: []
	W0906 20:07:12.700974   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:12.700983   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:12.700994   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:12.785989   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:12.786025   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:12.826678   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:12.826704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:12.881558   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:12.881599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:12.896035   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:12.896065   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:12.970721   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
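	Interleaved with the lines from process 73230 are readiness polls from three other clusters running in parallel (processes 72867, 72441 and 72322); each is stuck waiting for its metrics-server pod (metrics-server-6867b74b74-*) in kube-system to report Ready. A short sketch of how that state could be inspected by hand follows; the k8s-app=metrics-server label and the deployment name are assumptions based on the pod names in the log, not something this report confirms.

	# Inspect a metrics-server pod that never reports Ready (label/deployment name assumed).
	kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl -n kube-system describe pod metrics-server-6867b74b74-dds56
	kubectl -n kube-system logs deployment/metrics-server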
	I0906 20:07:12.839031   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.339615   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:13.656809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:15.657470   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:14.691032   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:16.692697   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
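Interleaved with that loop, three other test runs (PIDs 72867, 72441 and 72322) keep polling their metrics-server pods, which never report Ready. A rough equivalent of that readiness check, assuming kubectl and a working kubeconfig on the host (the test helper itself uses the Kubernetes Go client rather than shelling out, and the pod name below is taken from the log lines above):

    // ready.go - illustrative sketch only; approximates the Ready-condition
    // polling reported by pod_ready.go in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady reports whether the named pod's "Ready" condition is True,
    // read via kubectl's jsonpath output.
    func podReady(namespace, pod string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "pod", pod, "-n", namespace,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        // Poll a few times the way the log shows, a status line every couple
        // of seconds, before giving up.
        for i := 0; i < 5; i++ {
            ready, err := podReady("kube-system", "metrics-server-6867b74b74-dds56")
            if err != nil {
                fmt.Println("readiness check failed:", err)
            } else if ready {
                fmt.Println("metrics-server is Ready")
                return
            } else {
                fmt.Println(`pod has status "Ready":"False"`)
            }
            time.Sleep(2 * time.Second)
        }
    }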
	I0906 20:07:15.471171   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:15.484466   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:15.484541   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:15.518848   73230 cri.go:89] found id: ""
	I0906 20:07:15.518875   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.518886   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:15.518894   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:15.518953   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:15.553444   73230 cri.go:89] found id: ""
	I0906 20:07:15.553468   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.553476   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:15.553482   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:15.553528   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:15.589136   73230 cri.go:89] found id: ""
	I0906 20:07:15.589160   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.589168   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:15.589173   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:15.589220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:15.624410   73230 cri.go:89] found id: ""
	I0906 20:07:15.624434   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.624443   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:15.624449   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:15.624492   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:15.661506   73230 cri.go:89] found id: ""
	I0906 20:07:15.661535   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.661547   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:15.661555   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:15.661615   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:15.699126   73230 cri.go:89] found id: ""
	I0906 20:07:15.699148   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.699155   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:15.699161   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:15.699207   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:15.736489   73230 cri.go:89] found id: ""
	I0906 20:07:15.736523   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.736534   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:15.736542   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:15.736604   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:15.771988   73230 cri.go:89] found id: ""
	I0906 20:07:15.772013   73230 logs.go:276] 0 containers: []
	W0906 20:07:15.772020   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:15.772029   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:15.772045   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:15.822734   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:15.822765   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:15.836820   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:15.836872   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:15.915073   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:15.915111   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:15.915126   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:15.988476   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:15.988514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:18.528710   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:18.541450   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:18.541526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:18.581278   73230 cri.go:89] found id: ""
	I0906 20:07:18.581308   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.581317   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:18.581323   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:18.581381   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:18.616819   73230 cri.go:89] found id: ""
	I0906 20:07:18.616843   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.616850   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:18.616871   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:18.616923   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:18.655802   73230 cri.go:89] found id: ""
	I0906 20:07:18.655827   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.655842   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:18.655849   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:18.655908   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:18.693655   73230 cri.go:89] found id: ""
	I0906 20:07:18.693679   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.693689   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:18.693696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:18.693779   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:18.730882   73230 cri.go:89] found id: ""
	I0906 20:07:18.730914   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.730924   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:18.730931   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:18.730994   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:18.767219   73230 cri.go:89] found id: ""
	I0906 20:07:18.767243   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.767250   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:18.767256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:18.767316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:18.802207   73230 cri.go:89] found id: ""
	I0906 20:07:18.802230   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.802238   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:18.802243   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:18.802300   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:18.840449   73230 cri.go:89] found id: ""
	I0906 20:07:18.840471   73230 logs.go:276] 0 containers: []
	W0906 20:07:18.840481   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:18.840491   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:18.840504   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:18.892430   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:18.892469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:18.906527   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:18.906561   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:18.980462   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:18.980483   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:18.980494   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:19.059550   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:19.059588   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:17.340292   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:19.840090   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.156486   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:20.657764   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:18.693021   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:21.191529   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.191865   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
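Every one of the describe-nodes failures above has the same root symptom: nothing is accepting connections on localhost:8443, the apiserver endpoint kubectl is pointed at. A small sketch of that connectivity check (the port is taken from the error text above; run it on the affected node):

    // dialcheck.go - illustrative sketch only. Confirms whether anything
    // accepts TCP connections on localhost:8443.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 3*time.Second)
        if err != nil {
            // A "connection refused" here matches the describe-nodes failures in the log.
            fmt.Println("apiserver endpoint not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }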
	I0906 20:07:21.599879   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:21.614131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:21.614205   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:21.650887   73230 cri.go:89] found id: ""
	I0906 20:07:21.650910   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.650919   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:21.650924   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:21.650978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:21.684781   73230 cri.go:89] found id: ""
	I0906 20:07:21.684809   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.684819   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:21.684827   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:21.684907   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:21.722685   73230 cri.go:89] found id: ""
	I0906 20:07:21.722711   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.722722   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:21.722729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:21.722791   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:21.757581   73230 cri.go:89] found id: ""
	I0906 20:07:21.757607   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.757616   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:21.757622   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:21.757670   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:21.791984   73230 cri.go:89] found id: ""
	I0906 20:07:21.792008   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.792016   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:21.792022   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:21.792072   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:21.853612   73230 cri.go:89] found id: ""
	I0906 20:07:21.853636   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.853644   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:21.853650   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:21.853699   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:21.894184   73230 cri.go:89] found id: ""
	I0906 20:07:21.894232   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.894247   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:21.894256   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:21.894318   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:21.930731   73230 cri.go:89] found id: ""
	I0906 20:07:21.930758   73230 logs.go:276] 0 containers: []
	W0906 20:07:21.930768   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:21.930779   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:21.930798   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:21.969174   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:21.969207   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:22.017647   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:22.017680   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:22.033810   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:22.033852   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:22.111503   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:22.111530   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:22.111544   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:24.696348   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:24.710428   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:24.710506   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:24.747923   73230 cri.go:89] found id: ""
	I0906 20:07:24.747958   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.747969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:24.747977   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:24.748037   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:24.782216   73230 cri.go:89] found id: ""
	I0906 20:07:24.782250   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.782260   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:24.782268   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:24.782329   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:24.822093   73230 cri.go:89] found id: ""
	I0906 20:07:24.822126   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.822137   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:24.822148   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:24.822217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:24.857166   73230 cri.go:89] found id: ""
	I0906 20:07:24.857202   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.857213   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:24.857224   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:24.857314   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:24.892575   73230 cri.go:89] found id: ""
	I0906 20:07:24.892610   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.892621   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:24.892629   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:24.892689   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:24.929102   73230 cri.go:89] found id: ""
	I0906 20:07:24.929130   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.929140   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:24.929149   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:24.929206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:24.964224   73230 cri.go:89] found id: ""
	I0906 20:07:24.964257   73230 logs.go:276] 0 containers: []
	W0906 20:07:24.964268   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:24.964276   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:24.964337   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:25.000453   73230 cri.go:89] found id: ""
	I0906 20:07:25.000475   73230 logs.go:276] 0 containers: []
	W0906 20:07:25.000485   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:25.000496   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:25.000511   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:25.041824   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:25.041851   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:25.093657   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:25.093692   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:25.107547   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:25.107576   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:25.178732   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:25.178755   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:25.178771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:22.338864   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:24.339432   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:26.838165   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:23.156449   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.156979   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.158086   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:25.192653   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.693480   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:27.764271   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:27.777315   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:27.777389   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:27.812621   73230 cri.go:89] found id: ""
	I0906 20:07:27.812644   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.812655   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:27.812663   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:27.812718   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:27.853063   73230 cri.go:89] found id: ""
	I0906 20:07:27.853093   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.853104   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:27.853112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:27.853171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:27.894090   73230 cri.go:89] found id: ""
	I0906 20:07:27.894118   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.894130   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:27.894137   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:27.894196   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:27.930764   73230 cri.go:89] found id: ""
	I0906 20:07:27.930791   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.930802   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:27.930809   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:27.930870   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:27.967011   73230 cri.go:89] found id: ""
	I0906 20:07:27.967036   73230 logs.go:276] 0 containers: []
	W0906 20:07:27.967047   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:27.967053   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:27.967111   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:28.002119   73230 cri.go:89] found id: ""
	I0906 20:07:28.002146   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.002157   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:28.002164   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:28.002226   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:28.043884   73230 cri.go:89] found id: ""
	I0906 20:07:28.043909   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.043917   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:28.043923   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:28.043979   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:28.081510   73230 cri.go:89] found id: ""
	I0906 20:07:28.081538   73230 logs.go:276] 0 containers: []
	W0906 20:07:28.081547   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:28.081557   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:28.081568   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:28.159077   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:28.159109   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:28.207489   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:28.207527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:28.267579   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:28.267613   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:28.287496   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:28.287529   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:28.376555   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:28.838301   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.843091   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:29.655598   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:31.657757   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.192112   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:32.692354   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:30.876683   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:30.890344   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:30.890424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:30.930618   73230 cri.go:89] found id: ""
	I0906 20:07:30.930647   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.930658   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:30.930666   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:30.930727   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:30.968801   73230 cri.go:89] found id: ""
	I0906 20:07:30.968825   73230 logs.go:276] 0 containers: []
	W0906 20:07:30.968834   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:30.968839   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:30.968911   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:31.006437   73230 cri.go:89] found id: ""
	I0906 20:07:31.006463   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.006472   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:31.006477   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:31.006531   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:31.042091   73230 cri.go:89] found id: ""
	I0906 20:07:31.042117   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.042125   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:31.042131   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:31.042177   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:31.079244   73230 cri.go:89] found id: ""
	I0906 20:07:31.079271   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.079280   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:31.079286   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:31.079336   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:31.116150   73230 cri.go:89] found id: ""
	I0906 20:07:31.116174   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.116182   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:31.116188   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:31.116240   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:31.151853   73230 cri.go:89] found id: ""
	I0906 20:07:31.151877   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.151886   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:31.151892   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:31.151939   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:31.189151   73230 cri.go:89] found id: ""
	I0906 20:07:31.189181   73230 logs.go:276] 0 containers: []
	W0906 20:07:31.189192   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:31.189203   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:31.189218   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:31.234466   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:31.234493   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:31.286254   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:31.286288   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:31.300500   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:31.300525   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:31.372968   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:31.372987   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:31.372997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:33.949865   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:33.964791   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:33.964849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:34.027049   73230 cri.go:89] found id: ""
	I0906 20:07:34.027082   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.027094   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:34.027102   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:34.027162   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:34.080188   73230 cri.go:89] found id: ""
	I0906 20:07:34.080218   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.080230   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:34.080237   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:34.080320   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:34.124146   73230 cri.go:89] found id: ""
	I0906 20:07:34.124171   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.124179   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:34.124185   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:34.124230   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:34.161842   73230 cri.go:89] found id: ""
	I0906 20:07:34.161864   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.161872   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:34.161878   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:34.161938   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:34.201923   73230 cri.go:89] found id: ""
	I0906 20:07:34.201951   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.201961   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:34.201967   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:34.202032   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:34.246609   73230 cri.go:89] found id: ""
	I0906 20:07:34.246644   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.246656   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:34.246665   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:34.246739   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:34.287616   73230 cri.go:89] found id: ""
	I0906 20:07:34.287646   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.287657   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:34.287663   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:34.287721   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:34.322270   73230 cri.go:89] found id: ""
	I0906 20:07:34.322297   73230 logs.go:276] 0 containers: []
	W0906 20:07:34.322309   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:34.322320   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:34.322334   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:34.378598   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:34.378633   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:34.392748   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:34.392781   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:34.468620   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:34.468648   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:34.468663   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:34.548290   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:34.548324   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:33.339665   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.339890   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:34.157895   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:36.656829   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:35.192386   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.192574   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:37.095962   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:37.110374   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:37.110459   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:37.146705   73230 cri.go:89] found id: ""
	I0906 20:07:37.146732   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.146740   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:37.146746   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:37.146802   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:37.185421   73230 cri.go:89] found id: ""
	I0906 20:07:37.185449   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.185461   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:37.185468   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:37.185532   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:37.224767   73230 cri.go:89] found id: ""
	I0906 20:07:37.224793   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.224801   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:37.224806   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:37.224884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:37.265392   73230 cri.go:89] found id: ""
	I0906 20:07:37.265422   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.265432   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:37.265438   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:37.265496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:37.302065   73230 cri.go:89] found id: ""
	I0906 20:07:37.302093   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.302101   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:37.302107   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:37.302171   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:37.341466   73230 cri.go:89] found id: ""
	I0906 20:07:37.341493   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.341505   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:37.341513   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:37.341576   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.377701   73230 cri.go:89] found id: ""
	I0906 20:07:37.377724   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.377732   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:37.377738   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:37.377798   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:37.412927   73230 cri.go:89] found id: ""
	I0906 20:07:37.412955   73230 logs.go:276] 0 containers: []
	W0906 20:07:37.412966   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:37.412977   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:37.412993   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:37.427750   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:37.427776   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:37.500904   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:37.500928   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:37.500945   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:37.583204   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:37.583246   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:37.623477   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:37.623512   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.179798   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:40.194295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:40.194372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:40.229731   73230 cri.go:89] found id: ""
	I0906 20:07:40.229768   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.229779   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:40.229787   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:40.229848   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:40.275909   73230 cri.go:89] found id: ""
	I0906 20:07:40.275943   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.275956   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:40.275964   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:40.276049   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:40.316552   73230 cri.go:89] found id: ""
	I0906 20:07:40.316585   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.316594   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:40.316599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:40.316647   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:40.355986   73230 cri.go:89] found id: ""
	I0906 20:07:40.356017   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.356028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:40.356036   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:40.356095   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:40.396486   73230 cri.go:89] found id: ""
	I0906 20:07:40.396522   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.396535   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:40.396544   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:40.396609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:40.440311   73230 cri.go:89] found id: ""
	I0906 20:07:40.440338   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.440346   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:40.440352   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:40.440414   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:37.346532   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.839521   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.156737   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.156967   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:39.691703   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:41.691972   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:40.476753   73230 cri.go:89] found id: ""
	I0906 20:07:40.476781   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.476790   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:40.476797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:40.476844   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:40.514462   73230 cri.go:89] found id: ""
	I0906 20:07:40.514489   73230 logs.go:276] 0 containers: []
	W0906 20:07:40.514500   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:40.514511   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:40.514527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:40.553670   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:40.553700   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:40.608304   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:40.608343   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:40.622486   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:40.622514   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:40.699408   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:40.699434   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:40.699451   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.278892   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:43.292455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:43.292526   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:43.328900   73230 cri.go:89] found id: ""
	I0906 20:07:43.328929   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.328940   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:43.328948   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:43.329009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:43.366728   73230 cri.go:89] found id: ""
	I0906 20:07:43.366754   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.366762   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:43.366768   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:43.366817   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:43.401566   73230 cri.go:89] found id: ""
	I0906 20:07:43.401590   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.401599   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:43.401604   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:43.401650   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:43.437022   73230 cri.go:89] found id: ""
	I0906 20:07:43.437051   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.437063   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:43.437072   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:43.437140   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:43.473313   73230 cri.go:89] found id: ""
	I0906 20:07:43.473342   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.473354   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:43.473360   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:43.473420   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:43.513590   73230 cri.go:89] found id: ""
	I0906 20:07:43.513616   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.513624   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:43.513630   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:43.513690   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:43.549974   73230 cri.go:89] found id: ""
	I0906 20:07:43.550011   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.550025   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:43.550032   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:43.550100   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:43.592386   73230 cri.go:89] found id: ""
	I0906 20:07:43.592426   73230 logs.go:276] 0 containers: []
	W0906 20:07:43.592444   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:43.592454   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:43.592482   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:43.607804   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:43.607841   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:43.679533   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:43.679568   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:43.679580   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:43.762111   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:43.762145   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:43.802883   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:43.802908   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:42.340252   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:44.838648   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:46.838831   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.157956   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.657410   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:43.693014   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:45.693640   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.191509   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
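
	(Editor's note, not part of the log: the interleaved pod_ready lines come from three other test profiles, processes 72867, 72441 and 72322, each polling its metrics-server pod's Ready condition. A hedged manual equivalent with plain kubectl, using a pod name taken from the log and assuming the current kubeconfig context points at the matching profile:)

	    # Illustrative: prints the Ready condition that pod_ready.go is polling ("True"/"False").
	    kubectl -n kube-system get pod metrics-server-6867b74b74-nn295 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
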
	I0906 20:07:46.358429   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:46.371252   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:46.371326   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:46.406397   73230 cri.go:89] found id: ""
	I0906 20:07:46.406420   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.406430   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:46.406437   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:46.406496   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:46.452186   73230 cri.go:89] found id: ""
	I0906 20:07:46.452209   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.452218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:46.452223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:46.452288   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:46.489418   73230 cri.go:89] found id: ""
	I0906 20:07:46.489443   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.489454   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:46.489461   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:46.489523   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:46.529650   73230 cri.go:89] found id: ""
	I0906 20:07:46.529679   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.529690   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:46.529698   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:46.529760   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:46.566429   73230 cri.go:89] found id: ""
	I0906 20:07:46.566454   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.566466   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:46.566474   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:46.566539   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:46.604999   73230 cri.go:89] found id: ""
	I0906 20:07:46.605026   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.605034   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:46.605040   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:46.605085   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:46.643116   73230 cri.go:89] found id: ""
	I0906 20:07:46.643144   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.643155   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:46.643162   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:46.643222   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:46.679734   73230 cri.go:89] found id: ""
	I0906 20:07:46.679756   73230 logs.go:276] 0 containers: []
	W0906 20:07:46.679764   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:46.679772   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:46.679784   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:46.736380   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:46.736430   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:46.750649   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:46.750681   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:46.833098   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:46.833130   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:46.833146   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:46.912223   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:46.912267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.453662   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:49.466520   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:49.466585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:49.508009   73230 cri.go:89] found id: ""
	I0906 20:07:49.508038   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.508049   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:49.508056   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:49.508119   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:49.545875   73230 cri.go:89] found id: ""
	I0906 20:07:49.545900   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.545911   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:49.545918   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:49.545978   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:49.584899   73230 cri.go:89] found id: ""
	I0906 20:07:49.584926   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.584933   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:49.584940   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:49.585001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:49.621044   73230 cri.go:89] found id: ""
	I0906 20:07:49.621073   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.621085   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:49.621092   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:49.621146   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:49.657074   73230 cri.go:89] found id: ""
	I0906 20:07:49.657099   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.657108   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:49.657115   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:49.657174   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:49.693734   73230 cri.go:89] found id: ""
	I0906 20:07:49.693759   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.693767   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:49.693773   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:49.693827   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:49.729920   73230 cri.go:89] found id: ""
	I0906 20:07:49.729950   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.729960   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:49.729965   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:49.730014   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:49.765282   73230 cri.go:89] found id: ""
	I0906 20:07:49.765313   73230 logs.go:276] 0 containers: []
	W0906 20:07:49.765324   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:49.765335   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:49.765350   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:49.842509   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:49.842531   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:49.842543   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:49.920670   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:49.920704   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:49.961193   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:49.961220   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:50.014331   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:50.014366   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:48.839877   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:51.339381   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:48.156290   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.157337   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:50.692055   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:53.191487   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.529758   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:52.543533   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:52.543596   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:52.582802   73230 cri.go:89] found id: ""
	I0906 20:07:52.582826   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.582838   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:52.582845   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:52.582909   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:52.625254   73230 cri.go:89] found id: ""
	I0906 20:07:52.625287   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.625308   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:52.625317   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:52.625383   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:52.660598   73230 cri.go:89] found id: ""
	I0906 20:07:52.660621   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.660632   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:52.660640   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:52.660703   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:52.702980   73230 cri.go:89] found id: ""
	I0906 20:07:52.703004   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.703014   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:52.703021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:52.703082   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:52.740361   73230 cri.go:89] found id: ""
	I0906 20:07:52.740387   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.740394   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:52.740400   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:52.740447   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:52.780011   73230 cri.go:89] found id: ""
	I0906 20:07:52.780043   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.780056   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:52.780063   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:52.780123   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:52.825546   73230 cri.go:89] found id: ""
	I0906 20:07:52.825583   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.825595   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:52.825602   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:52.825659   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:52.864347   73230 cri.go:89] found id: ""
	I0906 20:07:52.864381   73230 logs.go:276] 0 containers: []
	W0906 20:07:52.864393   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:52.864403   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:52.864417   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:52.943041   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:52.943077   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:52.986158   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:52.986185   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:53.039596   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:53.039635   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:53.054265   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:53.054295   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:53.125160   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
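
	(Editor's note, not part of the log: every describe-nodes attempt fails the same way because nothing is listening on localhost:8443, which is consistent with the empty kube-apiserver container listings above. Two illustrative checks one could run on the node to confirm this; standard tools, not commands taken from the minikube log.)

	    # Illustrative: confirm nothing is serving the apiserver port on this node.
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"
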
	I0906 20:07:53.339887   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.839233   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:52.657521   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.157101   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.192803   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.692328   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:55.626058   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:55.639631   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:55.639705   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:55.677283   73230 cri.go:89] found id: ""
	I0906 20:07:55.677304   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.677312   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:55.677317   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:55.677372   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:55.714371   73230 cri.go:89] found id: ""
	I0906 20:07:55.714402   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.714414   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:55.714422   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:55.714509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:55.753449   73230 cri.go:89] found id: ""
	I0906 20:07:55.753487   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.753500   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:55.753507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:55.753575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:55.792955   73230 cri.go:89] found id: ""
	I0906 20:07:55.792987   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.792999   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:55.793006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:55.793074   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:55.827960   73230 cri.go:89] found id: ""
	I0906 20:07:55.827985   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.827996   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:55.828003   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:55.828052   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:55.867742   73230 cri.go:89] found id: ""
	I0906 20:07:55.867765   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.867778   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:55.867785   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:55.867849   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:55.907328   73230 cri.go:89] found id: ""
	I0906 20:07:55.907352   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.907359   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:55.907365   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:55.907424   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:55.946057   73230 cri.go:89] found id: ""
	I0906 20:07:55.946091   73230 logs.go:276] 0 containers: []
	W0906 20:07:55.946099   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:55.946108   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:55.946119   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:56.033579   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:56.033598   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:56.033611   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:56.116337   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:56.116372   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:56.163397   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:56.163428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:56.217189   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:56.217225   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:58.736147   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:07:58.749729   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:07:58.749833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:07:58.786375   73230 cri.go:89] found id: ""
	I0906 20:07:58.786399   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.786406   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:07:58.786412   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:07:58.786460   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:07:58.825188   73230 cri.go:89] found id: ""
	I0906 20:07:58.825210   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.825218   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:07:58.825223   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:07:58.825271   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:07:58.866734   73230 cri.go:89] found id: ""
	I0906 20:07:58.866756   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.866764   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:07:58.866769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:07:58.866823   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:07:58.909742   73230 cri.go:89] found id: ""
	I0906 20:07:58.909774   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.909785   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:07:58.909793   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:07:58.909850   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:07:58.950410   73230 cri.go:89] found id: ""
	I0906 20:07:58.950438   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.950447   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:07:58.950452   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:07:58.950500   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:07:58.987431   73230 cri.go:89] found id: ""
	I0906 20:07:58.987454   73230 logs.go:276] 0 containers: []
	W0906 20:07:58.987462   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:07:58.987468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:07:58.987518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:07:59.023432   73230 cri.go:89] found id: ""
	I0906 20:07:59.023462   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.023474   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:07:59.023482   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:07:59.023544   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:07:59.057695   73230 cri.go:89] found id: ""
	I0906 20:07:59.057724   73230 logs.go:276] 0 containers: []
	W0906 20:07:59.057734   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:07:59.057743   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:07:59.057755   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:07:59.109634   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:07:59.109671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:07:59.125436   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:07:59.125479   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:07:59.202018   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:07:59.202040   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:07:59.202054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:07:59.281418   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:07:59.281456   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:07:58.339751   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.842794   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:07:57.658145   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.155679   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.157913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:00.192179   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:02.193068   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:01.823947   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:01.839055   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:01.839115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:01.876178   73230 cri.go:89] found id: ""
	I0906 20:08:01.876206   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.876215   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:01.876220   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:01.876274   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:01.912000   73230 cri.go:89] found id: ""
	I0906 20:08:01.912028   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.912038   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:01.912045   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:01.912107   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:01.948382   73230 cri.go:89] found id: ""
	I0906 20:08:01.948412   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.948420   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:01.948426   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:01.948474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:01.982991   73230 cri.go:89] found id: ""
	I0906 20:08:01.983019   73230 logs.go:276] 0 containers: []
	W0906 20:08:01.983028   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:01.983033   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:01.983080   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:02.016050   73230 cri.go:89] found id: ""
	I0906 20:08:02.016076   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.016085   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:02.016091   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:02.016151   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:02.051087   73230 cri.go:89] found id: ""
	I0906 20:08:02.051125   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.051137   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:02.051150   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:02.051214   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:02.093230   73230 cri.go:89] found id: ""
	I0906 20:08:02.093254   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.093263   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:02.093268   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:02.093323   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:02.130580   73230 cri.go:89] found id: ""
	I0906 20:08:02.130609   73230 logs.go:276] 0 containers: []
	W0906 20:08:02.130619   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:02.130629   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:02.130644   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:02.183192   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:02.183231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:02.199079   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:02.199110   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:02.274259   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:02.274279   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:02.274303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:02.356198   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:02.356234   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:04.899180   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:04.912879   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:04.912955   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:04.950598   73230 cri.go:89] found id: ""
	I0906 20:08:04.950632   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.950642   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:04.950656   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:04.950713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:04.986474   73230 cri.go:89] found id: ""
	I0906 20:08:04.986504   73230 logs.go:276] 0 containers: []
	W0906 20:08:04.986513   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:04.986519   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:04.986570   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:05.025837   73230 cri.go:89] found id: ""
	I0906 20:08:05.025868   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.025877   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:05.025884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:05.025934   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:05.063574   73230 cri.go:89] found id: ""
	I0906 20:08:05.063613   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.063622   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:05.063628   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:05.063674   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:05.101341   73230 cri.go:89] found id: ""
	I0906 20:08:05.101371   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.101383   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:05.101390   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:05.101461   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:05.148551   73230 cri.go:89] found id: ""
	I0906 20:08:05.148580   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.148591   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:05.148599   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:05.148668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:05.186907   73230 cri.go:89] found id: ""
	I0906 20:08:05.186935   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.186945   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:05.186953   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:05.187019   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:05.226237   73230 cri.go:89] found id: ""
	I0906 20:08:05.226265   73230 logs.go:276] 0 containers: []
	W0906 20:08:05.226275   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:05.226287   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:05.226300   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:05.242892   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:05.242925   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:05.317797   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:05.317824   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:05.317839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:05.400464   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:05.400500   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:05.442632   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:05.442657   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:03.340541   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:05.840156   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.655913   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:06.657424   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:04.691255   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.191739   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:07.998033   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:08.012363   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:08.012441   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:08.048816   73230 cri.go:89] found id: ""
	I0906 20:08:08.048847   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.048876   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:08.048884   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:08.048947   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:08.109623   73230 cri.go:89] found id: ""
	I0906 20:08:08.109650   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.109661   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:08.109668   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:08.109730   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:08.145405   73230 cri.go:89] found id: ""
	I0906 20:08:08.145432   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.145443   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:08.145451   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:08.145514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:08.187308   73230 cri.go:89] found id: ""
	I0906 20:08:08.187344   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.187355   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:08.187362   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:08.187422   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:08.228782   73230 cri.go:89] found id: ""
	I0906 20:08:08.228815   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.228826   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:08.228833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:08.228918   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:08.269237   73230 cri.go:89] found id: ""
	I0906 20:08:08.269266   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.269276   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:08.269285   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:08.269351   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:08.305115   73230 cri.go:89] found id: ""
	I0906 20:08:08.305141   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.305149   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:08.305155   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:08.305206   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:08.345442   73230 cri.go:89] found id: ""
	I0906 20:08:08.345472   73230 logs.go:276] 0 containers: []
	W0906 20:08:08.345483   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:08.345494   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:08.345510   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:08.396477   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:08.396518   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:08.410978   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:08.411002   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:08.486220   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:08.486247   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:08.486265   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:08.574138   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:08.574190   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:08.339280   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:10.340142   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.156809   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.160037   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:09.192303   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.192456   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.192684   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:11.117545   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:11.131884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:11.131944   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:11.169481   73230 cri.go:89] found id: ""
	I0906 20:08:11.169507   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.169518   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:11.169525   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:11.169590   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:11.211068   73230 cri.go:89] found id: ""
	I0906 20:08:11.211092   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.211100   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:11.211105   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:11.211157   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:11.250526   73230 cri.go:89] found id: ""
	I0906 20:08:11.250560   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.250574   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:11.250580   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:11.250627   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:11.289262   73230 cri.go:89] found id: ""
	I0906 20:08:11.289284   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.289292   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:11.289299   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:11.289346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:11.335427   73230 cri.go:89] found id: ""
	I0906 20:08:11.335456   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.335467   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:11.335475   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:11.335535   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:11.375481   73230 cri.go:89] found id: ""
	I0906 20:08:11.375509   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.375518   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:11.375524   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:11.375575   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:11.416722   73230 cri.go:89] found id: ""
	I0906 20:08:11.416748   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.416758   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:11.416765   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:11.416830   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:11.452986   73230 cri.go:89] found id: ""
	I0906 20:08:11.453019   73230 logs.go:276] 0 containers: []
	W0906 20:08:11.453030   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:11.453042   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:11.453059   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:11.466435   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:11.466461   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:11.545185   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:11.545212   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:11.545231   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:11.627390   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:11.627422   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:11.674071   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:11.674098   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.225887   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:14.242121   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:14.242200   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:14.283024   73230 cri.go:89] found id: ""
	I0906 20:08:14.283055   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.283067   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:14.283074   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:14.283135   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:14.325357   73230 cri.go:89] found id: ""
	I0906 20:08:14.325379   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.325387   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:14.325392   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:14.325455   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:14.362435   73230 cri.go:89] found id: ""
	I0906 20:08:14.362459   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.362467   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:14.362473   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:14.362537   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:14.398409   73230 cri.go:89] found id: ""
	I0906 20:08:14.398441   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.398450   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:14.398455   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:14.398509   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:14.434902   73230 cri.go:89] found id: ""
	I0906 20:08:14.434934   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.434943   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:14.434950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:14.435009   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:14.476605   73230 cri.go:89] found id: ""
	I0906 20:08:14.476635   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.476647   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:14.476655   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:14.476717   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:14.533656   73230 cri.go:89] found id: ""
	I0906 20:08:14.533681   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.533690   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:14.533696   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:14.533753   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:14.599661   73230 cri.go:89] found id: ""
	I0906 20:08:14.599685   73230 logs.go:276] 0 containers: []
	W0906 20:08:14.599693   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:14.599702   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:14.599715   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:14.657680   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:14.657712   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:14.671594   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:14.671624   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:14.747945   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:14.747969   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:14.747979   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:14.829021   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:14.829057   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:12.838805   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:14.839569   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:13.659405   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:16.156840   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:15.692205   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.693709   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:17.373569   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:17.388910   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:17.388987   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:17.428299   73230 cri.go:89] found id: ""
	I0906 20:08:17.428335   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.428347   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:17.428354   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:17.428419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:17.464660   73230 cri.go:89] found id: ""
	I0906 20:08:17.464685   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.464692   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:17.464697   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:17.464758   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:17.500018   73230 cri.go:89] found id: ""
	I0906 20:08:17.500047   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.500059   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:17.500067   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:17.500130   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:17.536345   73230 cri.go:89] found id: ""
	I0906 20:08:17.536375   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.536386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:17.536394   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:17.536456   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:17.574668   73230 cri.go:89] found id: ""
	I0906 20:08:17.574696   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.574707   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:17.574715   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:17.574780   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:17.611630   73230 cri.go:89] found id: ""
	I0906 20:08:17.611653   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.611663   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:17.611669   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:17.611713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:17.647610   73230 cri.go:89] found id: ""
	I0906 20:08:17.647639   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.647649   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:17.647657   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:17.647724   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:17.686204   73230 cri.go:89] found id: ""
	I0906 20:08:17.686233   73230 logs.go:276] 0 containers: []
	W0906 20:08:17.686246   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:17.686260   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:17.686273   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:17.702040   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:17.702069   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:17.775033   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:17.775058   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:17.775074   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:17.862319   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:17.862359   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:17.905567   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:17.905604   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:17.339116   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:19.839554   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:21.839622   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:18.157104   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.657604   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.191024   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:22.192687   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:20.457191   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:20.471413   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:20.471474   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:20.533714   73230 cri.go:89] found id: ""
	I0906 20:08:20.533749   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.533765   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:20.533772   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:20.533833   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:20.580779   73230 cri.go:89] found id: ""
	I0906 20:08:20.580811   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.580823   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:20.580830   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:20.580902   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:20.619729   73230 cri.go:89] found id: ""
	I0906 20:08:20.619755   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.619763   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:20.619769   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:20.619816   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:20.661573   73230 cri.go:89] found id: ""
	I0906 20:08:20.661599   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.661606   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:20.661612   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:20.661664   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:20.709409   73230 cri.go:89] found id: ""
	I0906 20:08:20.709443   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.709455   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:20.709463   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:20.709515   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:20.746743   73230 cri.go:89] found id: ""
	I0906 20:08:20.746783   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.746808   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:20.746816   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:20.746891   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:20.788129   73230 cri.go:89] found id: ""
	I0906 20:08:20.788155   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.788164   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:20.788170   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:20.788217   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:20.825115   73230 cri.go:89] found id: ""
	I0906 20:08:20.825139   73230 logs.go:276] 0 containers: []
	W0906 20:08:20.825147   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:20.825156   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:20.825167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:20.880975   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:20.881013   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:20.895027   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:20.895061   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:20.972718   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:20.972739   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:20.972754   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:21.053062   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:21.053096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:23.595439   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:23.612354   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:23.612419   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:23.654479   73230 cri.go:89] found id: ""
	I0906 20:08:23.654508   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.654519   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:23.654526   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:23.654591   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:23.690061   73230 cri.go:89] found id: ""
	I0906 20:08:23.690092   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.690103   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:23.690112   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:23.690173   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:23.726644   73230 cri.go:89] found id: ""
	I0906 20:08:23.726670   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.726678   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:23.726684   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:23.726744   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:23.763348   73230 cri.go:89] found id: ""
	I0906 20:08:23.763378   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.763386   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:23.763391   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:23.763452   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:23.799260   73230 cri.go:89] found id: ""
	I0906 20:08:23.799290   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.799299   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:23.799305   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:23.799359   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:23.843438   73230 cri.go:89] found id: ""
	I0906 20:08:23.843470   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.843481   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:23.843489   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:23.843558   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:23.879818   73230 cri.go:89] found id: ""
	I0906 20:08:23.879847   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.879856   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:23.879867   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:23.879933   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:23.916182   73230 cri.go:89] found id: ""
	I0906 20:08:23.916207   73230 logs.go:276] 0 containers: []
	W0906 20:08:23.916220   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:23.916229   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:23.916240   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:23.987003   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:23.987022   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:23.987033   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:24.073644   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:24.073684   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:24.118293   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:24.118328   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:24.172541   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:24.172582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:23.840441   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.338539   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:23.155661   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:25.155855   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:27.157624   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:24.692350   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.692534   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:26.687747   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:26.702174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:26.702238   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:26.740064   73230 cri.go:89] found id: ""
	I0906 20:08:26.740093   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.740101   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:26.740108   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:26.740158   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:26.775198   73230 cri.go:89] found id: ""
	I0906 20:08:26.775227   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.775237   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:26.775244   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:26.775303   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:26.808850   73230 cri.go:89] found id: ""
	I0906 20:08:26.808892   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.808903   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:26.808915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:26.808974   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:26.842926   73230 cri.go:89] found id: ""
	I0906 20:08:26.842953   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.842964   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:26.842972   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:26.843031   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:26.878621   73230 cri.go:89] found id: ""
	I0906 20:08:26.878649   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.878658   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:26.878664   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:26.878713   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:26.921816   73230 cri.go:89] found id: ""
	I0906 20:08:26.921862   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.921875   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:26.921884   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:26.921952   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:26.960664   73230 cri.go:89] found id: ""
	I0906 20:08:26.960692   73230 logs.go:276] 0 containers: []
	W0906 20:08:26.960702   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:26.960709   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:26.960771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:27.004849   73230 cri.go:89] found id: ""
	I0906 20:08:27.004904   73230 logs.go:276] 0 containers: []
	W0906 20:08:27.004913   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:27.004922   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:27.004934   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:27.056237   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:27.056267   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:27.071882   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:27.071904   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:27.143927   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:27.143949   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:27.143961   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:27.223901   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:27.223935   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:29.766615   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:29.780295   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:29.780367   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:29.817745   73230 cri.go:89] found id: ""
	I0906 20:08:29.817775   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.817784   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:29.817790   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:29.817852   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:29.855536   73230 cri.go:89] found id: ""
	I0906 20:08:29.855559   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.855567   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:29.855572   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:29.855628   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:29.895043   73230 cri.go:89] found id: ""
	I0906 20:08:29.895092   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.895104   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:29.895111   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:29.895178   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:29.939225   73230 cri.go:89] found id: ""
	I0906 20:08:29.939248   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.939256   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:29.939262   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:29.939331   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:29.974166   73230 cri.go:89] found id: ""
	I0906 20:08:29.974190   73230 logs.go:276] 0 containers: []
	W0906 20:08:29.974198   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:29.974203   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:29.974258   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:30.009196   73230 cri.go:89] found id: ""
	I0906 20:08:30.009226   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.009237   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:30.009245   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:30.009310   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:30.043939   73230 cri.go:89] found id: ""
	I0906 20:08:30.043962   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.043970   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:30.043976   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:30.044023   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:30.080299   73230 cri.go:89] found id: ""
	I0906 20:08:30.080328   73230 logs.go:276] 0 containers: []
	W0906 20:08:30.080336   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:30.080345   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:30.080356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:30.131034   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:30.131068   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:30.145502   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:30.145536   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:30.219941   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:30.219963   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:30.219978   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:30.307958   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:30.307995   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:28.839049   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.338815   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.656748   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.657112   72441 pod_ready.go:103] pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:29.192284   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:31.193181   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.854002   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:32.867937   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:32.867998   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:32.906925   73230 cri.go:89] found id: ""
	I0906 20:08:32.906957   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.906969   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:32.906976   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:32.907038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:32.946662   73230 cri.go:89] found id: ""
	I0906 20:08:32.946691   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.946702   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:32.946710   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:32.946771   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:32.981908   73230 cri.go:89] found id: ""
	I0906 20:08:32.981936   73230 logs.go:276] 0 containers: []
	W0906 20:08:32.981944   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:32.981950   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:32.982001   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:33.014902   73230 cri.go:89] found id: ""
	I0906 20:08:33.014930   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.014939   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:33.014945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:33.015055   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:33.051265   73230 cri.go:89] found id: ""
	I0906 20:08:33.051290   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.051298   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:33.051310   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:33.051363   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:33.085436   73230 cri.go:89] found id: ""
	I0906 20:08:33.085468   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.085480   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:33.085487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:33.085552   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:33.121483   73230 cri.go:89] found id: ""
	I0906 20:08:33.121509   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.121517   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:33.121523   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:33.121578   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:33.159883   73230 cri.go:89] found id: ""
	I0906 20:08:33.159915   73230 logs.go:276] 0 containers: []
	W0906 20:08:33.159926   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:33.159937   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:33.159953   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:33.174411   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:33.174442   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:33.243656   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:33.243694   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:33.243710   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:33.321782   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:33.321823   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:33.363299   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:33.363335   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:33.339645   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.839545   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:32.650358   72441 pod_ready.go:82] duration metric: took 4m0.000296679s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:32.650386   72441 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-gtg94" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:32.650410   72441 pod_ready.go:39] duration metric: took 4m12.042795571s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:32.650440   72441 kubeadm.go:597] duration metric: took 4m19.97234293s to restartPrimaryControlPlane
	W0906 20:08:32.650505   72441 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:32.650542   72441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:08:33.692877   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:36.192090   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:38.192465   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:35.916159   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:35.929190   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:35.929265   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:35.962853   73230 cri.go:89] found id: ""
	I0906 20:08:35.962890   73230 logs.go:276] 0 containers: []
	W0906 20:08:35.962901   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:35.962909   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:35.962969   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:36.000265   73230 cri.go:89] found id: ""
	I0906 20:08:36.000309   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.000318   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:36.000324   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:36.000374   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:36.042751   73230 cri.go:89] found id: ""
	I0906 20:08:36.042781   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.042792   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:36.042800   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:36.042859   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:36.077922   73230 cri.go:89] found id: ""
	I0906 20:08:36.077957   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.077967   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:36.077975   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:36.078038   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:36.114890   73230 cri.go:89] found id: ""
	I0906 20:08:36.114926   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.114937   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:36.114945   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:36.114997   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:36.148058   73230 cri.go:89] found id: ""
	I0906 20:08:36.148089   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.148101   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:36.148108   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:36.148167   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:36.187334   73230 cri.go:89] found id: ""
	I0906 20:08:36.187361   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.187371   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:36.187379   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:36.187498   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:36.221295   73230 cri.go:89] found id: ""
	I0906 20:08:36.221331   73230 logs.go:276] 0 containers: []
	W0906 20:08:36.221342   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:36.221353   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:36.221367   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:36.273489   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:36.273527   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:36.287975   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:36.288005   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:36.366914   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:36.366937   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:36.366950   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:36.446582   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:36.446619   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.987075   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:39.001051   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:39.001113   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:39.038064   73230 cri.go:89] found id: ""
	I0906 20:08:39.038093   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.038103   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:39.038110   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:39.038175   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:39.075759   73230 cri.go:89] found id: ""
	I0906 20:08:39.075788   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.075799   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:39.075805   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:39.075866   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:39.113292   73230 cri.go:89] found id: ""
	I0906 20:08:39.113320   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.113331   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:39.113339   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:39.113404   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:39.157236   73230 cri.go:89] found id: ""
	I0906 20:08:39.157269   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.157281   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:39.157289   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:39.157362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:39.195683   73230 cri.go:89] found id: ""
	I0906 20:08:39.195704   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.195712   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:39.195717   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:39.195763   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:39.234865   73230 cri.go:89] found id: ""
	I0906 20:08:39.234894   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.234903   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:39.234909   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:39.234961   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:39.269946   73230 cri.go:89] found id: ""
	I0906 20:08:39.269975   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.269983   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:39.269989   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:39.270034   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:39.306184   73230 cri.go:89] found id: ""
	I0906 20:08:39.306214   73230 logs.go:276] 0 containers: []
	W0906 20:08:39.306225   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:39.306235   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:39.306249   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:39.357887   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:39.357920   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:39.371736   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:39.371767   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:39.445674   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:39.445695   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:39.445708   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:39.525283   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:39.525316   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:38.343370   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.839247   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:40.691846   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.694807   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:42.069066   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:42.083229   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:42.083313   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:42.124243   73230 cri.go:89] found id: ""
	I0906 20:08:42.124267   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.124275   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:42.124280   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:42.124330   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:42.162070   73230 cri.go:89] found id: ""
	I0906 20:08:42.162102   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.162113   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:42.162120   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:42.162183   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:42.199161   73230 cri.go:89] found id: ""
	I0906 20:08:42.199191   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.199201   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:42.199208   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:42.199266   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:42.236956   73230 cri.go:89] found id: ""
	I0906 20:08:42.236980   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.236991   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:42.236996   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:42.237068   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:42.272299   73230 cri.go:89] found id: ""
	I0906 20:08:42.272328   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.272336   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:42.272341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:42.272400   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:42.310280   73230 cri.go:89] found id: ""
	I0906 20:08:42.310304   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.310312   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:42.310317   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:42.310362   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:42.345850   73230 cri.go:89] found id: ""
	I0906 20:08:42.345873   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.345881   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:42.345887   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:42.345937   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:42.380785   73230 cri.go:89] found id: ""
	I0906 20:08:42.380812   73230 logs.go:276] 0 containers: []
	W0906 20:08:42.380820   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:42.380830   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:42.380843   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.435803   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:42.435839   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:42.450469   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:42.450498   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:42.521565   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:42.521587   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:42.521599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:42.595473   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:42.595508   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:45.136985   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:45.150468   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:45.150540   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:45.186411   73230 cri.go:89] found id: ""
	I0906 20:08:45.186440   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.186448   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:45.186454   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:45.186521   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:45.224463   73230 cri.go:89] found id: ""
	I0906 20:08:45.224495   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.224506   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:45.224513   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:45.224568   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:45.262259   73230 cri.go:89] found id: ""
	I0906 20:08:45.262286   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.262295   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:45.262301   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:45.262357   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:45.299463   73230 cri.go:89] found id: ""
	I0906 20:08:45.299492   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.299501   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:45.299507   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:45.299561   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:45.336125   73230 cri.go:89] found id: ""
	I0906 20:08:45.336153   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.336162   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:45.336168   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:45.336216   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:45.370397   73230 cri.go:89] found id: ""
	I0906 20:08:45.370427   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.370439   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:45.370448   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:45.370518   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:45.406290   73230 cri.go:89] found id: ""
	I0906 20:08:45.406322   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.406333   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:45.406341   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:45.406402   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:45.441560   73230 cri.go:89] found id: ""
	I0906 20:08:45.441592   73230 logs.go:276] 0 containers: []
	W0906 20:08:45.441603   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:45.441614   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:45.441627   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:42.840127   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.349331   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.192059   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:47.691416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:45.508769   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:45.508811   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:45.523659   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:45.523696   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:45.595544   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:45.595567   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:45.595582   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:45.676060   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:45.676096   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
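Note on the cycle above: each pass probes the guest's CRI for the core control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, dashboard) and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. The same probes can be repeated by hand from a shell inside the guest; this is only a sketch, and the profile name is a placeholder rather than something taken from this log:

    # from a shell inside the guest, e.g. via: minikube ssh -p <profile>
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400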
	I0906 20:08:48.216490   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:48.230021   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:48.230093   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:48.267400   73230 cri.go:89] found id: ""
	I0906 20:08:48.267433   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.267444   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:48.267451   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:48.267519   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:48.314694   73230 cri.go:89] found id: ""
	I0906 20:08:48.314722   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.314731   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:48.314739   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:48.314805   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:48.358861   73230 cri.go:89] found id: ""
	I0906 20:08:48.358895   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.358906   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:48.358915   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:48.358990   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:48.398374   73230 cri.go:89] found id: ""
	I0906 20:08:48.398400   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.398410   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:48.398416   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:48.398488   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:48.438009   73230 cri.go:89] found id: ""
	I0906 20:08:48.438039   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.438050   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:48.438058   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:48.438115   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:48.475970   73230 cri.go:89] found id: ""
	I0906 20:08:48.475998   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.476007   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:48.476013   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:48.476071   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:48.512191   73230 cri.go:89] found id: ""
	I0906 20:08:48.512220   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.512230   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:48.512237   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:48.512299   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:48.547820   73230 cri.go:89] found id: ""
	I0906 20:08:48.547850   73230 logs.go:276] 0 containers: []
	W0906 20:08:48.547861   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:48.547872   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:48.547886   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:48.616962   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:48.616997   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:48.631969   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:48.631998   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:48.717025   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:48.717043   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:48.717054   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:48.796131   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:48.796167   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:47.838558   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.839063   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.839099   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:49.693239   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:52.191416   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:51.342030   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:51.355761   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:51.355845   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:51.395241   73230 cri.go:89] found id: ""
	I0906 20:08:51.395272   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.395283   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:51.395290   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:51.395350   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:51.433860   73230 cri.go:89] found id: ""
	I0906 20:08:51.433888   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.433897   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:51.433904   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:51.433968   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:51.475568   73230 cri.go:89] found id: ""
	I0906 20:08:51.475598   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.475608   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:51.475615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:51.475678   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:51.512305   73230 cri.go:89] found id: ""
	I0906 20:08:51.512329   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.512337   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:51.512342   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:51.512391   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:51.545796   73230 cri.go:89] found id: ""
	I0906 20:08:51.545819   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.545827   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:51.545833   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:51.545884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:51.578506   73230 cri.go:89] found id: ""
	I0906 20:08:51.578531   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.578539   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:51.578545   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:51.578609   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:51.616571   73230 cri.go:89] found id: ""
	I0906 20:08:51.616596   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.616609   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:51.616615   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:51.616660   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:51.651542   73230 cri.go:89] found id: ""
	I0906 20:08:51.651566   73230 logs.go:276] 0 containers: []
	W0906 20:08:51.651580   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:51.651588   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:51.651599   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:51.705160   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:51.705193   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:51.719450   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:51.719477   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:51.789775   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:51.789796   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:51.789809   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:51.870123   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:51.870158   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.411818   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:54.425759   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:54.425818   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:54.467920   73230 cri.go:89] found id: ""
	I0906 20:08:54.467943   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.467951   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:54.467956   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:54.468008   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:54.508324   73230 cri.go:89] found id: ""
	I0906 20:08:54.508349   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.508357   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:54.508363   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:54.508410   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:54.544753   73230 cri.go:89] found id: ""
	I0906 20:08:54.544780   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.544790   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:54.544797   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:54.544884   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:54.581407   73230 cri.go:89] found id: ""
	I0906 20:08:54.581436   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.581446   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:54.581453   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:54.581514   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:54.618955   73230 cri.go:89] found id: ""
	I0906 20:08:54.618986   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.618998   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:54.619006   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:54.619065   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:54.656197   73230 cri.go:89] found id: ""
	I0906 20:08:54.656229   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.656248   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:54.656255   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:54.656316   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:54.697499   73230 cri.go:89] found id: ""
	I0906 20:08:54.697536   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.697544   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:54.697549   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:54.697600   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:54.734284   73230 cri.go:89] found id: ""
	I0906 20:08:54.734313   73230 logs.go:276] 0 containers: []
	W0906 20:08:54.734331   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:54.734342   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:54.734356   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:54.811079   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:54.811100   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:54.811111   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:54.887309   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:54.887346   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:54.930465   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:54.930499   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:55.000240   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:55.000303   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:54.339076   72867 pod_ready.go:103] pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:54.833352   72867 pod_ready.go:82] duration metric: took 4m0.000854511s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" ...
	E0906 20:08:54.833398   72867 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dds56" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:08:54.833423   72867 pod_ready.go:39] duration metric: took 4m14.79685184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:08:54.833458   72867 kubeadm.go:597] duration metric: took 4m22.254900492s to restartPrimaryControlPlane
	W0906 20:08:54.833525   72867 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:08:54.833576   72867 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
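The 4m0s WaitExtra timeout on metrics-server-6867b74b74-dds56 is what sends this run down the full kubeadm reset path. When triaging a failure like this, the pod's status and events are usually the first stop; a sketch, assuming the conventional k8s-app=metrics-server label and a kubeconfig already pointing at this profile (neither is taken from the log above):

    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod metrics-server-6867b74b74-dds56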
	I0906 20:08:54.192038   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:56.192120   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:58.193505   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:08:57.530956   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:08:57.544056   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:08:57.544136   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:08:57.584492   73230 cri.go:89] found id: ""
	I0906 20:08:57.584519   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.584528   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:08:57.584534   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:08:57.584585   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:08:57.620220   73230 cri.go:89] found id: ""
	I0906 20:08:57.620250   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.620259   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:08:57.620265   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:08:57.620321   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:08:57.655245   73230 cri.go:89] found id: ""
	I0906 20:08:57.655268   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.655283   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:08:57.655288   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:08:57.655346   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:08:57.690439   73230 cri.go:89] found id: ""
	I0906 20:08:57.690470   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.690481   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:08:57.690487   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:08:57.690551   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:08:57.728179   73230 cri.go:89] found id: ""
	I0906 20:08:57.728206   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.728214   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:08:57.728221   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:08:57.728270   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:08:57.763723   73230 cri.go:89] found id: ""
	I0906 20:08:57.763752   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.763761   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:08:57.763767   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:08:57.763825   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:08:57.799836   73230 cri.go:89] found id: ""
	I0906 20:08:57.799861   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.799869   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:08:57.799876   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:08:57.799922   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:08:57.834618   73230 cri.go:89] found id: ""
	I0906 20:08:57.834644   73230 logs.go:276] 0 containers: []
	W0906 20:08:57.834651   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:08:57.834660   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:08:57.834671   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:08:57.887297   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:08:57.887331   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:08:57.901690   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:08:57.901717   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:08:57.969179   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:08:57.969209   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:08:57.969223   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0906 20:08:58.052527   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:08:58.052642   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:08:58.870446   72441 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.219876198s)
	I0906 20:08:58.870530   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:08:58.888197   72441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:08:58.899185   72441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:08:58.909740   72441 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:08:58.909762   72441 kubeadm.go:157] found existing configuration files:
	
	I0906 20:08:58.909806   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:08:58.919589   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:08:58.919646   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:08:58.930386   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:08:58.940542   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:08:58.940621   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:08:58.951673   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.963471   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:08:58.963545   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:08:58.974638   72441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:08:58.984780   72441 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:08:58.984843   72441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:08:58.995803   72441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:08:59.046470   72441 kubeadm.go:310] W0906 20:08:59.003226    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.047297   72441 kubeadm.go:310] W0906 20:08:59.004193    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:08:59.166500   72441 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
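The Service-Kubelet preflight warning appears routinely in minikube runs and is generally benign, since minikube manages the kubelet service itself; on a hand-managed node the remedy is the one kubeadm prints:

    sudo systemctl enable kubelet.service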
	I0906 20:09:00.691499   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:02.692107   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:00.593665   73230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:00.608325   73230 kubeadm.go:597] duration metric: took 4m4.153407014s to restartPrimaryControlPlane
	W0906 20:09:00.608399   73230 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:00.608428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:05.878028   73230 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.269561172s)
	I0906 20:09:05.878112   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:05.893351   73230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:05.904668   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:05.915560   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:05.915583   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:05.915633   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:09:05.926566   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:05.926625   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:05.937104   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:09:05.946406   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:05.946467   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:05.956203   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.965691   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:05.965751   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:05.976210   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:09:05.986104   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:05.986174   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:05.996282   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:06.068412   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:09:06.068507   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:06.213882   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:06.214044   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:06.214191   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:09:06.406793   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.067295   72441 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:07.067370   72441 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:07.067449   72441 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:07.067595   72441 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:07.067737   72441 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:07.067795   72441 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:07.069381   72441 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:07.069477   72441 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:07.069559   72441 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:07.069652   72441 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:07.069733   72441 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:07.069825   72441 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:07.069898   72441 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:07.069981   72441 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:07.070068   72441 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:07.070178   72441 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:07.070279   72441 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:07.070349   72441 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:07.070424   72441 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:07.070494   72441 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:07.070592   72441 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:07.070669   72441 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.070755   72441 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.070828   72441 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.070916   72441 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.070972   72441 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:07.072214   72441 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.072317   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.072399   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.072487   72441 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.072613   72441 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.072685   72441 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.072719   72441 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.072837   72441 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:07.072977   72441 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:07.073063   72441 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.515053ms
	I0906 20:09:07.073178   72441 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:07.073257   72441 kubeadm.go:310] [api-check] The API server is healthy after 5.001748851s
	I0906 20:09:07.073410   72441 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:07.073558   72441 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:07.073650   72441 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:07.073860   72441 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-458066 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:07.073936   72441 kubeadm.go:310] [bootstrap-token] Using token: 3t2lf6.w44vkc4kfppuo2gp
	I0906 20:09:07.075394   72441 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:07.075524   72441 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:07.075621   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:07.075738   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:07.075905   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:07.076003   72441 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:07.076094   72441 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:07.076222   72441 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:07.076397   72441 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:07.076486   72441 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:07.076502   72441 kubeadm.go:310] 
	I0906 20:09:07.076579   72441 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:07.076594   72441 kubeadm.go:310] 
	I0906 20:09:07.076687   72441 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:07.076698   72441 kubeadm.go:310] 
	I0906 20:09:07.076727   72441 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:07.076810   72441 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:07.076893   72441 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:07.076900   72441 kubeadm.go:310] 
	I0906 20:09:07.077016   72441 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:07.077029   72441 kubeadm.go:310] 
	I0906 20:09:07.077090   72441 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:07.077105   72441 kubeadm.go:310] 
	I0906 20:09:07.077172   72441 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:07.077273   72441 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:07.077368   72441 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:07.077377   72441 kubeadm.go:310] 
	I0906 20:09:07.077496   72441 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:07.077589   72441 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:07.077600   72441 kubeadm.go:310] 
	I0906 20:09:07.077680   72441 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.077767   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:07.077807   72441 kubeadm.go:310] 	--control-plane 
	I0906 20:09:07.077817   72441 kubeadm.go:310] 
	I0906 20:09:07.077927   72441 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:07.077946   72441 kubeadm.go:310] 
	I0906 20:09:07.078053   72441 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3t2lf6.w44vkc4kfppuo2gp \
	I0906 20:09:07.078191   72441 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
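The join command printed above embeds a bootstrap token (3t2lf6.w44vkc4kfppuo2gp) that expires after kubeadm's default TTL; if a node needed to join later, the token could be listed or regenerated on the control-plane node, for example:

    sudo kubeadm token list
    sudo kubeadm token create --print-join-command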
	I0906 20:09:07.078206   72441 cni.go:84] Creating CNI manager for ""
	I0906 20:09:07.078216   72441 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:07.079782   72441 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:07.080965   72441 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:07.092500   72441 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
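The 496-byte 1-k8s.conflist pushed here is minikube's bridge CNI configuration for the crio runtime on this profile. After the run it can be inspected from a shell inside the guest (e.g. via minikube ssh against the embed-certs-458066 profile):

    sudo ls -la /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist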
	I0906 20:09:07.112546   72441 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:07.112618   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:07.112648   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-458066 minikube.k8s.io/updated_at=2024_09_06T20_09_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=embed-certs-458066 minikube.k8s.io/primary=true
	I0906 20:09:07.343125   72441 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:07.343284   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:06.408933   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:06.409043   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:06.409126   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:06.409242   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:06.409351   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:06.409445   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:06.409559   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:06.409666   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:06.409758   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:06.409870   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:06.409964   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:06.410010   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:06.410101   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:06.721268   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:06.888472   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:07.414908   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:07.505887   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:07.525704   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:07.525835   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:07.525913   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:07.699971   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:04.692422   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.193312   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:07.701970   73230 out.go:235]   - Booting up control plane ...
	I0906 20:09:07.702095   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:07.708470   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:07.710216   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:07.711016   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:07.714706   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:09:07.844097   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.344174   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:08.843884   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.343591   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:09.843748   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.344148   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:10.844002   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.343424   72441 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:11.444023   72441 kubeadm.go:1113] duration metric: took 4.331471016s to wait for elevateKubeSystemPrivileges
	I0906 20:09:11.444067   72441 kubeadm.go:394] duration metric: took 4m58.815096997s to StartCluster
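The burst of "kubectl get sa default" calls above is a wait loop: minikube retries until the default service account exists (i.e. the controller manager has caught up) before reporting the elevateKubeSystemPrivileges duration and declaring StartCluster done. The probe it repeats is the in-guest invocation already shown in the log:

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig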
	I0906 20:09:11.444093   72441 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.444186   72441 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:11.446093   72441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:11.446360   72441 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:11.446430   72441 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
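The toEnable map above is the profile's addon selection for this run (metrics-server, storage-provisioner and default-storageclass on, everything else off). The same selection can be inspected or adjusted from the minikube CLI, for example:

    minikube -p embed-certs-458066 addons list
    minikube -p embed-certs-458066 addons enable metrics-server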
	I0906 20:09:11.446521   72441 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-458066"
	I0906 20:09:11.446542   72441 addons.go:69] Setting default-storageclass=true in profile "embed-certs-458066"
	I0906 20:09:11.446560   72441 addons.go:69] Setting metrics-server=true in profile "embed-certs-458066"
	I0906 20:09:11.446609   72441 config.go:182] Loaded profile config "embed-certs-458066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:11.446615   72441 addons.go:234] Setting addon metrics-server=true in "embed-certs-458066"
	W0906 20:09:11.446663   72441 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:11.446694   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.446576   72441 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-458066"
	I0906 20:09:11.446570   72441 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-458066"
	W0906 20:09:11.446779   72441 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:11.446810   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.447077   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447112   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447170   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447211   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447350   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.447426   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.447879   72441 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:11.449461   72441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:11.463673   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44603
	I0906 20:09:11.463676   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0906 20:09:11.464129   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464231   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.464669   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464691   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.464675   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.464745   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.465097   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465139   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.465608   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465634   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.465731   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.465778   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.466622   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0906 20:09:11.466967   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.467351   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.467366   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.467622   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.467759   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.471093   72441 addons.go:234] Setting addon default-storageclass=true in "embed-certs-458066"
	W0906 20:09:11.471115   72441 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:11.471145   72441 host.go:66] Checking if "embed-certs-458066" exists ...
	I0906 20:09:11.471524   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.471543   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.488980   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46205
	I0906 20:09:11.489014   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0906 20:09:11.489399   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0906 20:09:11.489465   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489517   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.489908   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.490116   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490134   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490144   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490158   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490411   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.490427   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.490481   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490872   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.490886   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.491406   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.491500   72441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:11.491520   72441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:11.491619   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.493485   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.493901   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.495272   72441 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:11.495274   72441 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:11.496553   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:11.496575   72441 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:11.496597   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.496647   72441 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.496667   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:11.496684   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.500389   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500395   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500469   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500503   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500723   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.500786   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.500808   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.500952   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501105   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.501145   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501259   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.501305   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.501389   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.501501   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.510188   72441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0906 20:09:11.510617   72441 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:11.511142   72441 main.go:141] libmachine: Using API Version  1
	I0906 20:09:11.511169   72441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:11.511539   72441 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:11.511754   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetState
	I0906 20:09:11.513207   72441 main.go:141] libmachine: (embed-certs-458066) Calling .DriverName
	I0906 20:09:11.513439   72441 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.513455   72441 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:11.513474   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHHostname
	I0906 20:09:11.516791   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517292   72441 main.go:141] libmachine: (embed-certs-458066) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:22:05", ip: ""} in network mk-embed-certs-458066: {Iface:virbr1 ExpiryTime:2024-09-06 21:03:57 +0000 UTC Type:0 Mac:52:54:00:ab:22:05 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:embed-certs-458066 Clientid:01:52:54:00:ab:22:05}
	I0906 20:09:11.517323   72441 main.go:141] libmachine: (embed-certs-458066) DBG | domain embed-certs-458066 has defined IP address 192.168.39.118 and MAC address 52:54:00:ab:22:05 in network mk-embed-certs-458066
	I0906 20:09:11.517563   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHPort
	I0906 20:09:11.517898   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHKeyPath
	I0906 20:09:11.518085   72441 main.go:141] libmachine: (embed-certs-458066) Calling .GetSSHUsername
	I0906 20:09:11.518261   72441 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/embed-certs-458066/id_rsa Username:docker}
	I0906 20:09:11.669057   72441 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:11.705086   72441 node_ready.go:35] waiting up to 6m0s for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731651   72441 node_ready.go:49] node "embed-certs-458066" has status "Ready":"True"
	I0906 20:09:11.731679   72441 node_ready.go:38] duration metric: took 26.546983ms for node "embed-certs-458066" to be "Ready" ...
	I0906 20:09:11.731691   72441 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:11.740680   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
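
	The "waiting up to 6m0s for pod ..." lines above come from minikube polling the pod's Ready condition. A minimal client-go sketch of the same idea (this is not minikube's pod_ready.go; the kubeconfig path and pod name are taken from the log, and the 2-second poll interval is an assumption):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the on-node kubeconfig seen in the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-6f6b679f8f-br45p", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// Ready:"True" is what the pod_ready.go:93 lines report.
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
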
	I0906 20:09:11.767740   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:11.767760   72441 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:11.771571   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:11.804408   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:11.804435   72441 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:11.844160   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:11.856217   72441 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:11.856240   72441 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:11.899134   72441 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
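
	As the Run lines above show, the addon manifests are first copied to /etc/kubernetes/addons on the node and then applied with the cluster's bundled kubectl and kubeconfig. A rough sketch of re-issuing that apply with os/exec, assuming it is run on the node itself rather than through minikube's ssh_runner:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Same command and file paths as the logged kubectl apply; sudo accepts the
		// KUBECONFIG=... assignment before the command name.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply",
			"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("apply failed: %v\n%s", err, out)
		}
		log.Printf("applied metrics-server manifests:\n%s", out)
	}
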
	I0906 20:09:13.159543   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.315345353s)
	I0906 20:09:13.159546   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.387931315s)
	I0906 20:09:13.159639   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159660   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159601   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.159711   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.159946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.159985   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.159997   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160008   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160018   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160080   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160095   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160104   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.160115   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.160265   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160289   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.160401   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.160417   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185478   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.185512   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.185914   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.185934   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.185949   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.228561   72441 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.329382232s)
	I0906 20:09:13.228621   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.228636   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228924   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.228978   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.228991   72441 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:13.229001   72441 main.go:141] libmachine: (embed-certs-458066) Calling .Close
	I0906 20:09:13.228946   72441 main.go:141] libmachine: (embed-certs-458066) DBG | Closing plugin on server side
	I0906 20:09:13.229229   72441 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:13.229258   72441 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:13.229270   72441 addons.go:475] Verifying addon metrics-server=true in "embed-certs-458066"
	I0906 20:09:13.230827   72441 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:09.691281   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:11.692514   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:13.231988   72441 addons.go:510] duration metric: took 1.785558897s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0906 20:09:13.750043   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.247314   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.748039   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:16.748064   72441 pod_ready.go:82] duration metric: took 5.007352361s for pod "coredns-6f6b679f8f-br45p" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:16.748073   72441 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:14.192167   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:16.691856   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:18.754580   72441 pod_ready.go:103] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:19.254643   72441 pod_ready.go:93] pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:19.254669   72441 pod_ready.go:82] duration metric: took 2.506589666s for pod "coredns-6f6b679f8f-gtlxq" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:19.254680   72441 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762162   72441 pod_ready.go:93] pod "etcd-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.762188   72441 pod_ready.go:82] duration metric: took 1.507501384s for pod "etcd-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.762202   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770835   72441 pod_ready.go:93] pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.770860   72441 pod_ready.go:82] duration metric: took 8.65029ms for pod "kube-apiserver-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.770872   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779692   72441 pod_ready.go:93] pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.779713   72441 pod_ready.go:82] duration metric: took 8.832607ms for pod "kube-controller-manager-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.779725   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786119   72441 pod_ready.go:93] pod "kube-proxy-rzx2f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.786146   72441 pod_ready.go:82] duration metric: took 6.414063ms for pod "kube-proxy-rzx2f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.786158   72441 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852593   72441 pod_ready.go:93] pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:20.852630   72441 pod_ready.go:82] duration metric: took 66.461213ms for pod "kube-scheduler-embed-certs-458066" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:20.852642   72441 pod_ready.go:39] duration metric: took 9.120937234s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:20.852663   72441 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:20.852729   72441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:20.871881   72441 api_server.go:72] duration metric: took 9.425481233s to wait for apiserver process to appear ...
	I0906 20:09:20.871911   72441 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:20.871927   72441 api_server.go:253] Checking apiserver healthz at https://192.168.39.118:8443/healthz ...
	I0906 20:09:20.876997   72441 api_server.go:279] https://192.168.39.118:8443/healthz returned 200:
	ok
	I0906 20:09:20.878290   72441 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:20.878314   72441 api_server.go:131] duration metric: took 6.396943ms to wait for apiserver health ...
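
	The healthz probe logged above is a plain HTTPS GET against the apiserver that is expected to return 200 with the body "ok". A short illustrative version follows; the real check trusts the cluster CA, and InsecureSkipVerify is used here only to keep the sketch self-contained:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.118:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}
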
	I0906 20:09:20.878324   72441 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:21.057265   72441 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:21.057303   72441 system_pods.go:61] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.057312   72441 system_pods.go:61] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.057319   72441 system_pods.go:61] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.057326   72441 system_pods.go:61] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.057332   72441 system_pods.go:61] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.057338   72441 system_pods.go:61] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.057345   72441 system_pods.go:61] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.057356   72441 system_pods.go:61] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.057367   72441 system_pods.go:61] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.057381   72441 system_pods.go:74] duration metric: took 179.050809ms to wait for pod list to return data ...
	I0906 20:09:21.057394   72441 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:21.252816   72441 default_sa.go:45] found service account: "default"
	I0906 20:09:21.252842   72441 default_sa.go:55] duration metric: took 195.436403ms for default service account to be created ...
	I0906 20:09:21.252851   72441 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:21.455714   72441 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:21.455742   72441 system_pods.go:89] "coredns-6f6b679f8f-br45p" [de9992e3-3e5f-437d-90e0-b1087dca42e4] Running
	I0906 20:09:21.455748   72441 system_pods.go:89] "coredns-6f6b679f8f-gtlxq" [b806a981-e9dc-46ec-b440-94ea611c8d27] Running
	I0906 20:09:21.455752   72441 system_pods.go:89] "etcd-embed-certs-458066" [b04655c1-dde8-42c6-a068-422fc9266105] Running
	I0906 20:09:21.455755   72441 system_pods.go:89] "kube-apiserver-embed-certs-458066" [6d21102e-a987-4a76-92a5-a0359cb115ef] Running
	I0906 20:09:21.455759   72441 system_pods.go:89] "kube-controller-manager-embed-certs-458066" [3b72efd8-c333-4fce-a0f2-20ee29932165] Running
	I0906 20:09:21.455763   72441 system_pods.go:89] "kube-proxy-rzx2f" [77e52ab6-7d95-4a7a-acfa-66bbc748d1db] Running
	I0906 20:09:21.455766   72441 system_pods.go:89] "kube-scheduler-embed-certs-458066" [1e96bb4b-3eb8-4d50-a840-7fd77fe86191] Running
	I0906 20:09:21.455772   72441 system_pods.go:89] "metrics-server-6867b74b74-74kzz" [5de1ac37-3f32-44f5-a2ba-e0a3173782ae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:21.455776   72441 system_pods.go:89] "storage-provisioner" [51644de2-a533-44ec-8e7e-4842e80a896e] Running
	I0906 20:09:21.455784   72441 system_pods.go:126] duration metric: took 202.909491ms to wait for k8s-apps to be running ...
	I0906 20:09:21.455791   72441 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:21.455832   72441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.474124   72441 system_svc.go:56] duration metric: took 18.325386ms WaitForService to wait for kubelet
	I0906 20:09:21.474150   72441 kubeadm.go:582] duration metric: took 10.027757317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:21.474172   72441 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:21.653674   72441 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:21.653697   72441 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:21.653708   72441 node_conditions.go:105] duration metric: took 179.531797ms to run NodePressure ...
	I0906 20:09:21.653718   72441 start.go:241] waiting for startup goroutines ...
	I0906 20:09:21.653727   72441 start.go:246] waiting for cluster config update ...
	I0906 20:09:21.653740   72441 start.go:255] writing updated cluster config ...
	I0906 20:09:21.654014   72441 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:21.703909   72441 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:21.705502   72441 out.go:177] * Done! kubectl is now configured to use "embed-certs-458066" cluster and "default" namespace by default
	I0906 20:09:21.102986   72867 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.269383553s)
	I0906 20:09:21.103094   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:21.118935   72867 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:09:21.129099   72867 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:09:21.139304   72867 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:09:21.139326   72867 kubeadm.go:157] found existing configuration files:
	
	I0906 20:09:21.139374   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0906 20:09:21.149234   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:09:21.149289   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:09:21.160067   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0906 20:09:21.169584   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:09:21.169664   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:09:21.179885   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.190994   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:09:21.191062   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:09:21.201649   72867 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0906 20:09:21.211165   72867 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:09:21.211223   72867 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:09:21.220998   72867 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:09:21.269780   72867 kubeadm.go:310] W0906 20:09:21.240800    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.270353   72867 kubeadm.go:310] W0906 20:09:21.241533    2522 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:09:21.389445   72867 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:09:18.692475   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:21.193075   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:23.697031   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:26.191208   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:28.192166   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:30.493468   72867 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:09:30.493543   72867 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:09:30.493620   72867 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:09:30.493751   72867 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:09:30.493891   72867 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:09:30.493971   72867 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:09:30.495375   72867 out.go:235]   - Generating certificates and keys ...
	I0906 20:09:30.495467   72867 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:09:30.495537   72867 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:09:30.495828   72867 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:09:30.495913   72867 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:09:30.495977   72867 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:09:30.496024   72867 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:09:30.496112   72867 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:09:30.496207   72867 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:09:30.496308   72867 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:09:30.496400   72867 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:09:30.496452   72867 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:09:30.496519   72867 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:09:30.496601   72867 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:09:30.496690   72867 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:09:30.496774   72867 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:09:30.496887   72867 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:09:30.496946   72867 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:09:30.497018   72867 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:09:30.497074   72867 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:09:30.498387   72867 out.go:235]   - Booting up control plane ...
	I0906 20:09:30.498472   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:09:30.498550   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:09:30.498616   72867 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:09:30.498715   72867 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:09:30.498786   72867 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:09:30.498821   72867 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:09:30.498969   72867 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:09:30.499076   72867 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:09:30.499126   72867 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.325552ms
	I0906 20:09:30.499189   72867 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:09:30.499269   72867 kubeadm.go:310] [api-check] The API server is healthy after 5.002261512s
	I0906 20:09:30.499393   72867 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:09:30.499507   72867 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:09:30.499586   72867 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:09:30.499818   72867 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-653828 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:09:30.499915   72867 kubeadm.go:310] [bootstrap-token] Using token: 6yha4r.f9kcjkhkq2u0pp1e
	I0906 20:09:30.501217   72867 out.go:235]   - Configuring RBAC rules ...
	I0906 20:09:30.501333   72867 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:09:30.501438   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:09:30.501630   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:09:30.501749   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:09:30.501837   72867 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:09:30.501904   72867 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:09:30.501996   72867 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:09:30.502032   72867 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:09:30.502085   72867 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:09:30.502093   72867 kubeadm.go:310] 
	I0906 20:09:30.502153   72867 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:09:30.502166   72867 kubeadm.go:310] 
	I0906 20:09:30.502242   72867 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:09:30.502257   72867 kubeadm.go:310] 
	I0906 20:09:30.502290   72867 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:09:30.502358   72867 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:09:30.502425   72867 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:09:30.502433   72867 kubeadm.go:310] 
	I0906 20:09:30.502486   72867 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:09:30.502494   72867 kubeadm.go:310] 
	I0906 20:09:30.502529   72867 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:09:30.502536   72867 kubeadm.go:310] 
	I0906 20:09:30.502575   72867 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:09:30.502633   72867 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:09:30.502706   72867 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:09:30.502720   72867 kubeadm.go:310] 
	I0906 20:09:30.502791   72867 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:09:30.502882   72867 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:09:30.502893   72867 kubeadm.go:310] 
	I0906 20:09:30.502982   72867 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503099   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:09:30.503120   72867 kubeadm.go:310] 	--control-plane 
	I0906 20:09:30.503125   72867 kubeadm.go:310] 
	I0906 20:09:30.503240   72867 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:09:30.503247   72867 kubeadm.go:310] 
	I0906 20:09:30.503312   72867 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 6yha4r.f9kcjkhkq2u0pp1e \
	I0906 20:09:30.503406   72867 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:09:30.503416   72867 cni.go:84] Creating CNI manager for ""
	I0906 20:09:30.503424   72867 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:09:30.504880   72867 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:09:30.505997   72867 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:09:30.517864   72867 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
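
	The file copied to /etc/cni/net.d/1-k8s.conflist above configures the bridge CNI chain that the "Configuring bridge CNI" step refers to. An illustrative sketch of writing such a conflist; the field values (subnet, plugin options) are assumptions, not the exact file minikube generated:

	package main

	import (
		"fmt"
		"os"
	)

	// A minimal bridge + portmap plugin chain in the standard CNI conflist format.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println("write failed:", err)
			return
		}
		fmt.Println("wrote bridge CNI config")
	}
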
	I0906 20:09:30.539641   72867 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:09:30.539731   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653828 minikube.k8s.io/updated_at=2024_09_06T20_09_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=default-k8s-diff-port-653828 minikube.k8s.io/primary=true
	I0906 20:09:30.539732   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.576812   72867 ops.go:34] apiserver oom_adj: -16
	I0906 20:09:30.742163   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.242299   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:31.742502   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:30.192201   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.691488   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:32.242418   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:32.742424   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.242317   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:33.742587   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.242563   72867 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:09:34.342481   72867 kubeadm.go:1113] duration metric: took 3.802829263s to wait for elevateKubeSystemPrivileges
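
	The repeated "kubectl get sa default" runs above are a retry loop waiting for the default service account to exist before kube-system privileges are elevated. A hypothetical equivalent using os/exec (the 2-minute deadline and 500ms interval are assumptions, not minikube's values):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Mirrors the logged command; exits the loop once the service account exists.
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.0/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}
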
	I0906 20:09:34.342520   72867 kubeadm.go:394] duration metric: took 5m1.826839653s to StartCluster
	I0906 20:09:34.342542   72867 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.342640   72867 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:09:34.345048   72867 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:09:34.345461   72867 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.16 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:09:34.345576   72867 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:09:34.345655   72867 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345691   72867 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653828"
	I0906 20:09:34.345696   72867 config.go:182] Loaded profile config "default-k8s-diff-port-653828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:09:34.345699   72867 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345712   72867 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653828"
	I0906 20:09:34.345737   72867 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653828"
	W0906 20:09:34.345703   72867 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:09:34.345752   72867 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.345762   72867 addons.go:243] addon metrics-server should already be in state true
	I0906 20:09:34.345779   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.345795   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.346102   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346136   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346174   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346195   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.346231   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.346201   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.347895   72867 out.go:177] * Verifying Kubernetes components...
	I0906 20:09:34.349535   72867 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:09:34.363021   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
	I0906 20:09:34.363492   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.364037   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.364062   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.364463   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.365147   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.365186   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.365991   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36811
	I0906 20:09:34.366024   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I0906 20:09:34.366472   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366512   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.366953   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.366970   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367086   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.367113   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.367494   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367642   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.367988   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.368011   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.368282   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.375406   72867 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653828"
	W0906 20:09:34.375432   72867 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:09:34.375460   72867 host.go:66] Checking if "default-k8s-diff-port-653828" exists ...
	I0906 20:09:34.375825   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.375858   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.382554   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41397
	I0906 20:09:34.383102   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.383600   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.383616   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.383938   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.384214   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.385829   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.387409   72867 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:09:34.388348   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:09:34.388366   72867 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:09:34.388381   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.392542   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.392813   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.392828   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.393018   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.393068   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0906 20:09:34.393374   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.393439   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.393550   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.393686   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.394089   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.394116   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.394464   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.394651   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.396559   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.396712   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I0906 20:09:34.397142   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.397646   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.397669   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.397929   72867 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:09:34.398023   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.398468   72867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:09:34.398511   72867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:09:34.399007   72867 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.399024   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:09:34.399043   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.405024   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405057   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.405081   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.405287   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.405479   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.405634   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.405752   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.414779   72867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0906 20:09:34.415230   72867 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:09:34.415662   72867 main.go:141] libmachine: Using API Version  1
	I0906 20:09:34.415679   72867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:09:34.415993   72867 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:09:34.416151   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetState
	I0906 20:09:34.417818   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .DriverName
	I0906 20:09:34.418015   72867 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.418028   72867 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:09:34.418045   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHHostname
	I0906 20:09:34.421303   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421379   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:b1:87", ip: ""} in network mk-default-k8s-diff-port-653828: {Iface:virbr2 ExpiryTime:2024-09-06 21:04:18 +0000 UTC Type:0 Mac:52:54:00:0a:b1:87 Iaid: IPaddr:192.168.50.16 Prefix:24 Hostname:default-k8s-diff-port-653828 Clientid:01:52:54:00:0a:b1:87}
	I0906 20:09:34.421399   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | domain default-k8s-diff-port-653828 has defined IP address 192.168.50.16 and MAC address 52:54:00:0a:b1:87 in network mk-default-k8s-diff-port-653828
	I0906 20:09:34.421645   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHPort
	I0906 20:09:34.421815   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHKeyPath
	I0906 20:09:34.421979   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .GetSSHUsername
	I0906 20:09:34.422096   72867 sshutil.go:53] new ssh client: &{IP:192.168.50.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/default-k8s-diff-port-653828/id_rsa Username:docker}
	I0906 20:09:34.582923   72867 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:09:34.600692   72867 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617429   72867 node_ready.go:49] node "default-k8s-diff-port-653828" has status "Ready":"True"
	I0906 20:09:34.617454   72867 node_ready.go:38] duration metric: took 16.723446ms for node "default-k8s-diff-port-653828" to be "Ready" ...
	I0906 20:09:34.617465   72867 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:34.632501   72867 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:34.679561   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:09:34.682999   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:09:34.746380   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:09:34.746406   72867 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:09:34.876650   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:09:34.876680   72867 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:09:34.935388   72867 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:34.935415   72867 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:09:35.092289   72867 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:09:35.709257   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02965114s)
	I0906 20:09:35.709297   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026263795s)
	I0906 20:09:35.709352   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709373   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709319   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709398   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709810   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.709911   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709898   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.709926   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.709954   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.709962   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.709876   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710029   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710047   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.710065   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.710226   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710238   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.710636   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:35.710665   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.710681   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754431   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:35.754458   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:35.754765   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:35.754781   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:35.754821   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.181191   72867 pod_ready.go:93] pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:36.181219   72867 pod_ready.go:82] duration metric: took 1.54868366s for pod "etcd-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.181233   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:36.351617   72867 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.259284594s)
	I0906 20:09:36.351684   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.351701   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.351992   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352078   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352100   72867 main.go:141] libmachine: Making call to close driver server
	I0906 20:09:36.352111   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) Calling .Close
	I0906 20:09:36.352055   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352402   72867 main.go:141] libmachine: (default-k8s-diff-port-653828) DBG | Closing plugin on server side
	I0906 20:09:36.352914   72867 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:09:36.352934   72867 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:09:36.352945   72867 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-653828"
	I0906 20:09:36.354972   72867 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0906 20:09:36.356127   72867 addons.go:510] duration metric: took 2.010554769s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0906 20:09:34.695700   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:37.193366   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:38.187115   72867 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:39.188966   72867 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:39.188998   72867 pod_ready.go:82] duration metric: took 3.007757042s for pod "kube-apiserver-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:39.189012   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:41.196228   72867 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.206614   72867 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.206636   72867 pod_ready.go:82] duration metric: took 3.017616218s for pod "kube-controller-manager-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.206647   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212140   72867 pod_ready.go:93] pod "kube-proxy-7846f" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.212165   72867 pod_ready.go:82] duration metric: took 5.512697ms for pod "kube-proxy-7846f" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.212174   72867 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217505   72867 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace has status "Ready":"True"
	I0906 20:09:42.217527   72867 pod_ready.go:82] duration metric: took 5.346748ms for pod "kube-scheduler-default-k8s-diff-port-653828" in "kube-system" namespace to be "Ready" ...
	I0906 20:09:42.217534   72867 pod_ready.go:39] duration metric: took 7.600058293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
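Editor's note: the pod_ready lines above are minikube polling each system-critical pod until its PodReady condition reports True. A stripped-down client-go sketch of that same check is shown below for context; it is illustrative only (kubeconfig path and pod name are taken from the log, the helper itself is hypothetical and is not minikube's pod_ready.go code).

// pod_ready_sketch.go - stripped-down client-go version of the "pod Ready"
// check seen in the log above; illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True.
func isPodReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one control-plane pod name taken from the log above.
	for {
		ready, err := isPodReady(cs, "kube-system", "etcd-default-k8s-diff-port-653828")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}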
	I0906 20:09:42.217549   72867 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:09:42.217600   72867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:09:42.235961   72867 api_server.go:72] duration metric: took 7.890460166s to wait for apiserver process to appear ...
	I0906 20:09:42.235987   72867 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:09:42.236003   72867 api_server.go:253] Checking apiserver healthz at https://192.168.50.16:8444/healthz ...
	I0906 20:09:42.240924   72867 api_server.go:279] https://192.168.50.16:8444/healthz returned 200:
	ok
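Editor's note: the two lines above record the apiserver health probe — an HTTPS GET to /healthz that must return 200 with body "ok" before the wait loop proceeds. A minimal sketch of such a probe follows; it is not minikube's actual api_server.go implementation, and the timeout and TLS handling are assumptions made only to keep the example self-contained.

// healthz_probe.go - illustrative sketch of an apiserver healthz probe,
// not minikube's actual implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz returns nil when GET <endpoint>/healthz answers 200.
// TLS verification is skipped purely to keep the sketch self-contained;
// a real client would trust the cluster CA instead.
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	// Address and port 8444 taken from the log above.
	if err := checkHealthz("https://192.168.50.16:8444"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	fmt.Println("apiserver healthy")
}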
	I0906 20:09:42.241889   72867 api_server.go:141] control plane version: v1.31.0
	I0906 20:09:42.241912   72867 api_server.go:131] duration metric: took 5.919055ms to wait for apiserver health ...
	I0906 20:09:42.241922   72867 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:09:42.247793   72867 system_pods.go:59] 9 kube-system pods found
	I0906 20:09:42.247825   72867 system_pods.go:61] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.247833   72867 system_pods.go:61] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.247839   72867 system_pods.go:61] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.247845   72867 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.247852   72867 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.247857   72867 system_pods.go:61] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.247861   72867 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.247866   72867 system_pods.go:61] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.247873   72867 system_pods.go:61] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.247883   72867 system_pods.go:74] duration metric: took 5.95413ms to wait for pod list to return data ...
	I0906 20:09:42.247893   72867 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:09:42.251260   72867 default_sa.go:45] found service account: "default"
	I0906 20:09:42.251277   72867 default_sa.go:55] duration metric: took 3.3795ms for default service account to be created ...
	I0906 20:09:42.251284   72867 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:09:42.256204   72867 system_pods.go:86] 9 kube-system pods found
	I0906 20:09:42.256228   72867 system_pods.go:89] "coredns-6f6b679f8f-h9hv9" [bf6ec352-3abf-4738-8f19-8a70916e98a9] Running
	I0906 20:09:42.256233   72867 system_pods.go:89] "coredns-6f6b679f8f-v4r9m" [84854d53-cb74-42c8-bb74-92536fcd300d] Running
	I0906 20:09:42.256237   72867 system_pods.go:89] "etcd-default-k8s-diff-port-653828" [1694e103-0bb0-49eb-b9b1-c5e8dda465d7] Running
	I0906 20:09:42.256241   72867 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653828" [3243d1b2-d2a1-475f-971b-2f83f0f65bca] Running
	I0906 20:09:42.256245   72867 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653828" [208af0a8-8485-495a-9124-ce0a82d3ca20] Running
	I0906 20:09:42.256249   72867 system_pods.go:89] "kube-proxy-7846f" [30e0658b-592e-4d52-b431-f1227e742e5a] Running
	I0906 20:09:42.256252   72867 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653828" [106bc4c8-4313-44d0-bdfb-dbb866c6deed] Running
	I0906 20:09:42.256258   72867 system_pods.go:89] "metrics-server-6867b74b74-nwk7f" [6ed9e2aa-6997-4a33-a25f-e7f1c4dfdcbe] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:09:42.256261   72867 system_pods.go:89] "storage-provisioner" [c2a4afa2-1018-41f6-aecf-1b6300f520a3] Running
	I0906 20:09:42.256270   72867 system_pods.go:126] duration metric: took 4.981383ms to wait for k8s-apps to be running ...
	I0906 20:09:42.256278   72867 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:09:42.256323   72867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:09:42.272016   72867 system_svc.go:56] duration metric: took 15.727796ms WaitForService to wait for kubelet
	I0906 20:09:42.272050   72867 kubeadm.go:582] duration metric: took 7.926551396s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:09:42.272081   72867 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:09:42.275486   72867 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:09:42.275516   72867 node_conditions.go:123] node cpu capacity is 2
	I0906 20:09:42.275527   72867 node_conditions.go:105] duration metric: took 3.439966ms to run NodePressure ...
	I0906 20:09:42.275540   72867 start.go:241] waiting for startup goroutines ...
	I0906 20:09:42.275548   72867 start.go:246] waiting for cluster config update ...
	I0906 20:09:42.275561   72867 start.go:255] writing updated cluster config ...
	I0906 20:09:42.275823   72867 ssh_runner.go:195] Run: rm -f paused
	I0906 20:09:42.326049   72867 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:09:42.328034   72867 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653828" cluster and "default" namespace by default
	I0906 20:09:39.692393   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:42.192176   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:44.691934   72322 pod_ready.go:103] pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace has status "Ready":"False"
	I0906 20:09:45.185317   72322 pod_ready.go:82] duration metric: took 4m0.000138495s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" ...
	E0906 20:09:45.185352   72322 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-nn295" in "kube-system" namespace to be "Ready" (will not retry!)
	I0906 20:09:45.185371   72322 pod_ready.go:39] duration metric: took 4m12.222584677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:09:45.185403   72322 kubeadm.go:597] duration metric: took 4m20.152442555s to restartPrimaryControlPlane
	W0906 20:09:45.185466   72322 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0906 20:09:45.185496   72322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:09:47.714239   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:09:47.714464   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:47.714711   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:09:52.715187   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:09:52.715391   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:02.716155   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:02.716424   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:11.446625   72322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.261097398s)
	I0906 20:10:11.446717   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:11.472899   72322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 20:10:11.492643   72322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:10:11.509855   72322 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:10:11.509878   72322 kubeadm.go:157] found existing configuration files:
	
	I0906 20:10:11.509933   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:10:11.523039   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:10:11.523099   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:10:11.540484   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:10:11.560246   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:10:11.560323   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:10:11.585105   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.596067   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:10:11.596138   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:10:11.607049   72322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:10:11.616982   72322 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:10:11.617058   72322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
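Editor's note: the block above is the stale kubeconfig cleanup — for each file under /etc/kubernetes, grep for the expected control-plane endpoint and remove the file when the endpoint is absent (here each grep fails simply because the files no longer exist after `kubeadm reset`). A rough Go sketch of that loop follows; file names and the endpoint are taken from the log, while the helper itself is hypothetical.

// stale_config_cleanup.go - rough sketch of the kubeconfig cleanup step seen
// in the log; not minikube's actual kubeadm.go code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// removeStaleConfigs deletes any kubeconfig that does not reference the
// expected control-plane endpoint (or that cannot be read at all).
func removeStaleConfigs(dir, endpoint string) {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, name := range files {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors "sudo rm -f <path>" from the log: missing files are ignored.
			os.Remove(path)
			fmt.Printf("removed stale config (or it was already absent): %s\n", path)
		}
	}
}

func main() {
	removeStaleConfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443")
}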
	I0906 20:10:11.627880   72322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:10:11.672079   72322 kubeadm.go:310] W0906 20:10:11.645236    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.672935   72322 kubeadm.go:310] W0906 20:10:11.646151    3038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 20:10:11.789722   72322 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:10:20.270339   72322 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 20:10:20.270450   72322 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:10:20.270551   72322 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:10:20.270697   72322 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:10:20.270837   72322 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 20:10:20.270932   72322 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:10:20.272324   72322 out.go:235]   - Generating certificates and keys ...
	I0906 20:10:20.272437   72322 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:10:20.272530   72322 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:10:20.272634   72322 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:10:20.272732   72322 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:10:20.272842   72322 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:10:20.272950   72322 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:10:20.273051   72322 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:10:20.273135   72322 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:10:20.273272   72322 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:10:20.273361   72322 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:10:20.273400   72322 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:10:20.273456   72322 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:10:20.273517   72322 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:10:20.273571   72322 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 20:10:20.273625   72322 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:10:20.273682   72322 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:10:20.273731   72322 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:10:20.273801   72322 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:10:20.273856   72322 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:10:20.275359   72322 out.go:235]   - Booting up control plane ...
	I0906 20:10:20.275466   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:10:20.275539   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:10:20.275595   72322 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:10:20.275692   72322 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:10:20.275774   72322 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:10:20.275812   72322 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:10:20.275917   72322 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 20:10:20.276005   72322 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 20:10:20.276063   72322 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001365031s
	I0906 20:10:20.276127   72322 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 20:10:20.276189   72322 kubeadm.go:310] [api-check] The API server is healthy after 5.002810387s
	I0906 20:10:20.276275   72322 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 20:10:20.276410   72322 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 20:10:20.276480   72322 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 20:10:20.276639   72322 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-504385 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 20:10:20.276690   72322 kubeadm.go:310] [bootstrap-token] Using token: fv12w2.cc6vcthx5yn6r6ru
	I0906 20:10:20.277786   72322 out.go:235]   - Configuring RBAC rules ...
	I0906 20:10:20.277872   72322 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 20:10:20.277941   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 20:10:20.278082   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 20:10:20.278231   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 20:10:20.278351   72322 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 20:10:20.278426   72322 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 20:10:20.278541   72322 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 20:10:20.278614   72322 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 20:10:20.278692   72322 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 20:10:20.278700   72322 kubeadm.go:310] 
	I0906 20:10:20.278780   72322 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 20:10:20.278790   72322 kubeadm.go:310] 
	I0906 20:10:20.278880   72322 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 20:10:20.278889   72322 kubeadm.go:310] 
	I0906 20:10:20.278932   72322 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 20:10:20.279023   72322 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 20:10:20.279079   72322 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 20:10:20.279086   72322 kubeadm.go:310] 
	I0906 20:10:20.279141   72322 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 20:10:20.279148   72322 kubeadm.go:310] 
	I0906 20:10:20.279186   72322 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 20:10:20.279195   72322 kubeadm.go:310] 
	I0906 20:10:20.279291   72322 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 20:10:20.279420   72322 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 20:10:20.279524   72322 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 20:10:20.279535   72322 kubeadm.go:310] 
	I0906 20:10:20.279647   72322 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 20:10:20.279756   72322 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 20:10:20.279767   72322 kubeadm.go:310] 
	I0906 20:10:20.279896   72322 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280043   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 \
	I0906 20:10:20.280080   72322 kubeadm.go:310] 	--control-plane 
	I0906 20:10:20.280090   72322 kubeadm.go:310] 
	I0906 20:10:20.280230   72322 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 20:10:20.280258   72322 kubeadm.go:310] 
	I0906 20:10:20.280365   72322 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fv12w2.cc6vcthx5yn6r6ru \
	I0906 20:10:20.280514   72322 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ce2b2c80475093ce3b8f3f84488ab9d84b6682b0b811baa96a811939d5053d80 
	I0906 20:10:20.280532   72322 cni.go:84] Creating CNI manager for ""
	I0906 20:10:20.280541   72322 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 20:10:20.282066   72322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 20:10:20.283228   72322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 20:10:20.294745   72322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 20:10:20.317015   72322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:20.317137   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-504385 minikube.k8s.io/updated_at=2024_09_06T20_10_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=no-preload-504385 minikube.k8s.io/primary=true
	I0906 20:10:20.528654   72322 ops.go:34] apiserver oom_adj: -16
	I0906 20:10:20.528681   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.029394   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:21.528922   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.029667   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:22.528814   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.029163   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:23.529709   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.029277   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.529466   72322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 20:10:24.668636   72322 kubeadm.go:1113] duration metric: took 4.351557657s to wait for elevateKubeSystemPrivileges
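Editor's note: the repeated `kubectl get sa default` runs above are a simple poll — retried roughly every half second until the `default` service account exists, after which the elapsed time is reported. A generic polling sketch in that spirit is shown below; the interval and timeout are assumptions, not minikube's exact values.

// wait_for_default_sa.go - generic polling sketch modeled on the
// "kubectl get sa default" retry loop in the log; illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
// timeout expires, returning how long the wait took.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	deadline := start.Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return time.Since(start), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return time.Since(start), fmt.Errorf("default service account not found within %s", timeout)
}

func main() {
	elapsed, err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("default service account ready after %s\n", elapsed)
}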
	I0906 20:10:24.668669   72322 kubeadm.go:394] duration metric: took 4m59.692142044s to StartCluster
	I0906 20:10:24.668690   72322 settings.go:142] acquiring lock: {Name:mk8fffa52684b28168283cc3a564987eee23d260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.668775   72322 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 20:10:24.670483   72322 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-6021/kubeconfig: {Name:mk2abf259be9bf4e88153026345fc2a1fe218409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 20:10:24.670765   72322 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.184 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0906 20:10:24.670874   72322 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0906 20:10:24.670975   72322 addons.go:69] Setting storage-provisioner=true in profile "no-preload-504385"
	I0906 20:10:24.670990   72322 config.go:182] Loaded profile config "no-preload-504385": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 20:10:24.671015   72322 addons.go:234] Setting addon storage-provisioner=true in "no-preload-504385"
	W0906 20:10:24.671027   72322 addons.go:243] addon storage-provisioner should already be in state true
	I0906 20:10:24.670988   72322 addons.go:69] Setting default-storageclass=true in profile "no-preload-504385"
	I0906 20:10:24.671020   72322 addons.go:69] Setting metrics-server=true in profile "no-preload-504385"
	I0906 20:10:24.671053   72322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-504385"
	I0906 20:10:24.671069   72322 addons.go:234] Setting addon metrics-server=true in "no-preload-504385"
	I0906 20:10:24.671057   72322 host.go:66] Checking if "no-preload-504385" exists ...
	W0906 20:10:24.671080   72322 addons.go:243] addon metrics-server should already be in state true
	I0906 20:10:24.671112   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.671387   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671413   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671433   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671462   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.671476   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.671509   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.672599   72322 out.go:177] * Verifying Kubernetes components...
	I0906 20:10:24.674189   72322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 20:10:24.688494   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0906 20:10:24.689082   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.689564   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.689586   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.690020   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.690242   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.691753   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0906 20:10:24.691758   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0906 20:10:24.692223   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692314   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.692744   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692761   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.692892   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.692912   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.693162   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693498   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.693821   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.693851   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694035   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694067   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.694118   72322 addons.go:234] Setting addon default-storageclass=true in "no-preload-504385"
	W0906 20:10:24.694133   72322 addons.go:243] addon default-storageclass should already be in state true
	I0906 20:10:24.694159   72322 host.go:66] Checking if "no-preload-504385" exists ...
	I0906 20:10:24.694503   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.694533   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.710695   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0906 20:10:24.712123   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.712820   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.712844   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.713265   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.713488   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.714238   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42981
	I0906 20:10:24.714448   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0906 20:10:24.714584   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.714801   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.715454   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715472   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715517   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.715631   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.715643   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.715961   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716468   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.716527   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.717120   72322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 20:10:24.717170   72322 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 20:10:24.717534   72322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 20:10:24.718838   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.719392   72322 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:24.719413   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 20:10:24.719435   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.720748   72322 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0906 20:10:22.717567   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:10:22.717827   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:10:24.722045   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 20:10:24.722066   72322 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 20:10:24.722084   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.722722   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723383   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.723408   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.723545   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.723788   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.723970   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.724133   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.725538   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.725987   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.726006   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.726137   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.726317   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.726499   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.726629   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.734236   72322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
	I0906 20:10:24.734597   72322 main.go:141] libmachine: () Calling .GetVersion
	I0906 20:10:24.735057   72322 main.go:141] libmachine: Using API Version  1
	I0906 20:10:24.735069   72322 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 20:10:24.735479   72322 main.go:141] libmachine: () Calling .GetMachineName
	I0906 20:10:24.735612   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetState
	I0906 20:10:24.737446   72322 main.go:141] libmachine: (no-preload-504385) Calling .DriverName
	I0906 20:10:24.737630   72322 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:24.737647   72322 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 20:10:24.737658   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHHostname
	I0906 20:10:24.740629   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741040   72322 main.go:141] libmachine: (no-preload-504385) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:57:e7", ip: ""} in network mk-no-preload-504385: {Iface:virbr3 ExpiryTime:2024-09-06 21:05:00 +0000 UTC Type:0 Mac:52:54:00:4c:57:e7 Iaid: IPaddr:192.168.61.184 Prefix:24 Hostname:no-preload-504385 Clientid:01:52:54:00:4c:57:e7}
	I0906 20:10:24.741063   72322 main.go:141] libmachine: (no-preload-504385) DBG | domain no-preload-504385 has defined IP address 192.168.61.184 and MAC address 52:54:00:4c:57:e7 in network mk-no-preload-504385
	I0906 20:10:24.741251   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHPort
	I0906 20:10:24.741418   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHKeyPath
	I0906 20:10:24.741530   72322 main.go:141] libmachine: (no-preload-504385) Calling .GetSSHUsername
	I0906 20:10:24.741659   72322 sshutil.go:53] new ssh client: &{IP:192.168.61.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/no-preload-504385/id_rsa Username:docker}
	I0906 20:10:24.903190   72322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 20:10:24.944044   72322 node_ready.go:35] waiting up to 6m0s for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960395   72322 node_ready.go:49] node "no-preload-504385" has status "Ready":"True"
	I0906 20:10:24.960436   72322 node_ready.go:38] duration metric: took 16.357022ms for node "no-preload-504385" to be "Ready" ...
	I0906 20:10:24.960453   72322 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:24.981153   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:25.103072   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 20:10:25.113814   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 20:10:25.113843   72322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0906 20:10:25.123206   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 20:10:25.209178   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 20:10:25.209208   72322 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 20:10:25.255577   72322 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.255604   72322 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 20:10:25.297179   72322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 20:10:25.336592   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336615   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.336915   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.336930   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.336938   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.336945   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.337164   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.337178   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.350330   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.350356   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.350630   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.350648   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850349   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850377   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850688   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.850707   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:25.850717   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:25.850725   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:25.850974   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:25.851012   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.033886   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.033918   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034215   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034221   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034241   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034250   72322 main.go:141] libmachine: Making call to close driver server
	I0906 20:10:26.034258   72322 main.go:141] libmachine: (no-preload-504385) Calling .Close
	I0906 20:10:26.034525   72322 main.go:141] libmachine: (no-preload-504385) DBG | Closing plugin on server side
	I0906 20:10:26.034533   72322 main.go:141] libmachine: Successfully made call to close driver server
	I0906 20:10:26.034579   72322 main.go:141] libmachine: Making call to close connection to plugin binary
	I0906 20:10:26.034593   72322 addons.go:475] Verifying addon metrics-server=true in "no-preload-504385"
	I0906 20:10:26.036358   72322 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0906 20:10:26.037927   72322 addons.go:510] duration metric: took 1.367055829s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0906 20:10:26.989945   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:28.987386   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:28.987407   72322 pod_ready.go:82] duration metric: took 4.006228588s for pod "coredns-6f6b679f8f-ffnb7" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:28.987419   72322 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:30.994020   72322 pod_ready.go:103] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"False"
	I0906 20:10:32.999308   72322 pod_ready.go:93] pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:32.999332   72322 pod_ready.go:82] duration metric: took 4.01190401s for pod "coredns-6f6b679f8f-lwxzl" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:32.999344   72322 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005872   72322 pod_ready.go:93] pod "etcd-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.005898   72322 pod_ready.go:82] duration metric: took 1.006546878s for pod "etcd-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.005908   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010279   72322 pod_ready.go:93] pod "kube-apiserver-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.010306   72322 pod_ready.go:82] duration metric: took 4.391154ms for pod "kube-apiserver-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.010315   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014331   72322 pod_ready.go:93] pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.014346   72322 pod_ready.go:82] duration metric: took 4.025331ms for pod "kube-controller-manager-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.014354   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018361   72322 pod_ready.go:93] pod "kube-proxy-48s2x" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.018378   72322 pod_ready.go:82] duration metric: took 4.018525ms for pod "kube-proxy-48s2x" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.018386   72322 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191606   72322 pod_ready.go:93] pod "kube-scheduler-no-preload-504385" in "kube-system" namespace has status "Ready":"True"
	I0906 20:10:34.191630   72322 pod_ready.go:82] duration metric: took 173.23777ms for pod "kube-scheduler-no-preload-504385" in "kube-system" namespace to be "Ready" ...
	I0906 20:10:34.191638   72322 pod_ready.go:39] duration metric: took 9.231173272s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 20:10:34.191652   72322 api_server.go:52] waiting for apiserver process to appear ...
	I0906 20:10:34.191738   72322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 20:10:34.207858   72322 api_server.go:72] duration metric: took 9.537052258s to wait for apiserver process to appear ...
	I0906 20:10:34.207883   72322 api_server.go:88] waiting for apiserver healthz status ...
	I0906 20:10:34.207904   72322 api_server.go:253] Checking apiserver healthz at https://192.168.61.184:8443/healthz ...
	I0906 20:10:34.214477   72322 api_server.go:279] https://192.168.61.184:8443/healthz returned 200:
	ok
	I0906 20:10:34.216178   72322 api_server.go:141] control plane version: v1.31.0
	I0906 20:10:34.216211   72322 api_server.go:131] duration metric: took 8.319856ms to wait for apiserver health ...
	I0906 20:10:34.216221   72322 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 20:10:34.396409   72322 system_pods.go:59] 9 kube-system pods found
	I0906 20:10:34.396443   72322 system_pods.go:61] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.396451   72322 system_pods.go:61] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.396456   72322 system_pods.go:61] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.396461   72322 system_pods.go:61] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.396468   72322 system_pods.go:61] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.396472   72322 system_pods.go:61] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.396477   72322 system_pods.go:61] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.396487   72322 system_pods.go:61] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.396502   72322 system_pods.go:61] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.396514   72322 system_pods.go:74] duration metric: took 180.284785ms to wait for pod list to return data ...
	I0906 20:10:34.396526   72322 default_sa.go:34] waiting for default service account to be created ...
	I0906 20:10:34.592160   72322 default_sa.go:45] found service account: "default"
	I0906 20:10:34.592186   72322 default_sa.go:55] duration metric: took 195.651674ms for default service account to be created ...
	I0906 20:10:34.592197   72322 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 20:10:34.795179   72322 system_pods.go:86] 9 kube-system pods found
	I0906 20:10:34.795210   72322 system_pods.go:89] "coredns-6f6b679f8f-ffnb7" [59184ee8-fe9e-479d-b298-0ee9818e4a00] Running
	I0906 20:10:34.795217   72322 system_pods.go:89] "coredns-6f6b679f8f-lwxzl" [e2df0b29-0770-447f-8051-fce39e9acff0] Running
	I0906 20:10:34.795221   72322 system_pods.go:89] "etcd-no-preload-504385" [1d9d27eb-82f2-45aa-911c-f1e4562e5093] Running
	I0906 20:10:34.795224   72322 system_pods.go:89] "kube-apiserver-no-preload-504385" [bbbf0ec9-9056-4019-aef3-abbbe6eb8fee] Running
	I0906 20:10:34.795228   72322 system_pods.go:89] "kube-controller-manager-no-preload-504385" [d81aa028-ade5-42bf-893d-4968dcdf0519] Running
	I0906 20:10:34.795232   72322 system_pods.go:89] "kube-proxy-48s2x" [dd175211-d965-4b1a-a37a-d1e6df47f09b] Running
	I0906 20:10:34.795238   72322 system_pods.go:89] "kube-scheduler-no-preload-504385" [743fd56a-9190-4d94-8ff8-d95332e2c84a] Running
	I0906 20:10:34.795244   72322 system_pods.go:89] "metrics-server-6867b74b74-56mkl" [73747864-24bf-42d0-956b-6047a52ed887] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 20:10:34.795249   72322 system_pods.go:89] "storage-provisioner" [db548eab-0f9d-4e22-a5ba-0ed7c2a8ff11] Running
	I0906 20:10:34.795258   72322 system_pods.go:126] duration metric: took 203.05524ms to wait for k8s-apps to be running ...
	I0906 20:10:34.795270   72322 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 20:10:34.795328   72322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:10:34.810406   72322 system_svc.go:56] duration metric: took 15.127486ms WaitForService to wait for kubelet
	I0906 20:10:34.810437   72322 kubeadm.go:582] duration metric: took 10.13963577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 20:10:34.810461   72322 node_conditions.go:102] verifying NodePressure condition ...
	I0906 20:10:34.993045   72322 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0906 20:10:34.993077   72322 node_conditions.go:123] node cpu capacity is 2
	I0906 20:10:34.993092   72322 node_conditions.go:105] duration metric: took 182.626456ms to run NodePressure ...
	I0906 20:10:34.993105   72322 start.go:241] waiting for startup goroutines ...
	I0906 20:10:34.993112   72322 start.go:246] waiting for cluster config update ...
	I0906 20:10:34.993122   72322 start.go:255] writing updated cluster config ...
	I0906 20:10:34.993401   72322 ssh_runner.go:195] Run: rm -f paused
	I0906 20:10:35.043039   72322 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 20:10:35.045782   72322 out.go:177] * Done! kubectl is now configured to use "no-preload-504385" cluster and "default" namespace by default
	I0906 20:11:02.719781   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:02.720062   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:02.720077   73230 kubeadm.go:310] 
	I0906 20:11:02.720125   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:11:02.720177   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:11:02.720189   73230 kubeadm.go:310] 
	I0906 20:11:02.720246   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:11:02.720290   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:11:02.720443   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:11:02.720469   73230 kubeadm.go:310] 
	I0906 20:11:02.720593   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:11:02.720665   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:11:02.720722   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:11:02.720746   73230 kubeadm.go:310] 
	I0906 20:11:02.720900   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:11:02.721018   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:11:02.721028   73230 kubeadm.go:310] 
	I0906 20:11:02.721180   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:11:02.721311   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:11:02.721405   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:11:02.721500   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:11:02.721512   73230 kubeadm.go:310] 
	I0906 20:11:02.722088   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:11:02.722199   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:11:02.722310   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0906 20:11:02.722419   73230 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0906 20:11:02.722469   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0906 20:11:03.188091   73230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 20:11:03.204943   73230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 20:11:03.215434   73230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 20:11:03.215458   73230 kubeadm.go:157] found existing configuration files:
	
	I0906 20:11:03.215506   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 20:11:03.225650   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 20:11:03.225713   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 20:11:03.236252   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 20:11:03.245425   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 20:11:03.245489   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 20:11:03.255564   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.264932   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 20:11:03.265014   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 20:11:03.274896   73230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 20:11:03.284027   73230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 20:11:03.284092   73230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 20:11:03.294368   73230 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0906 20:11:03.377411   73230 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0906 20:11:03.377509   73230 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 20:11:03.537331   73230 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 20:11:03.537590   73230 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 20:11:03.537722   73230 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0906 20:11:03.728458   73230 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 20:11:03.730508   73230 out.go:235]   - Generating certificates and keys ...
	I0906 20:11:03.730621   73230 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 20:11:03.730720   73230 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 20:11:03.730869   73230 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0906 20:11:03.730984   73230 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0906 20:11:03.731082   73230 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0906 20:11:03.731167   73230 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0906 20:11:03.731258   73230 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0906 20:11:03.731555   73230 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0906 20:11:03.731896   73230 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0906 20:11:03.732663   73230 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0906 20:11:03.732953   73230 kubeadm.go:310] [certs] Using the existing "sa" key
	I0906 20:11:03.733053   73230 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 20:11:03.839927   73230 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 20:11:03.988848   73230 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 20:11:04.077497   73230 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 20:11:04.213789   73230 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 20:11:04.236317   73230 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 20:11:04.237625   73230 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 20:11:04.237719   73230 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 20:11:04.399036   73230 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 20:11:04.400624   73230 out.go:235]   - Booting up control plane ...
	I0906 20:11:04.400709   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 20:11:04.401417   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 20:11:04.402751   73230 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 20:11:04.404122   73230 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 20:11:04.407817   73230 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0906 20:11:44.410273   73230 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0906 20:11:44.410884   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:44.411132   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:49.411428   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:49.411674   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:11:59.412917   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:11:59.413182   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:19.414487   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:19.414692   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415457   73230 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0906 20:12:59.415729   73230 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0906 20:12:59.415750   73230 kubeadm.go:310] 
	I0906 20:12:59.415808   73230 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0906 20:12:59.415864   73230 kubeadm.go:310] 		timed out waiting for the condition
	I0906 20:12:59.415874   73230 kubeadm.go:310] 
	I0906 20:12:59.415933   73230 kubeadm.go:310] 	This error is likely caused by:
	I0906 20:12:59.415979   73230 kubeadm.go:310] 		- The kubelet is not running
	I0906 20:12:59.416147   73230 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0906 20:12:59.416167   73230 kubeadm.go:310] 
	I0906 20:12:59.416332   73230 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0906 20:12:59.416372   73230 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0906 20:12:59.416420   73230 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0906 20:12:59.416428   73230 kubeadm.go:310] 
	I0906 20:12:59.416542   73230 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0906 20:12:59.416650   73230 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0906 20:12:59.416659   73230 kubeadm.go:310] 
	I0906 20:12:59.416818   73230 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0906 20:12:59.416928   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0906 20:12:59.417030   73230 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0906 20:12:59.417139   73230 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0906 20:12:59.417153   73230 kubeadm.go:310] 
	I0906 20:12:59.417400   73230 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 20:12:59.417485   73230 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0906 20:12:59.417559   73230 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0906 20:12:59.417626   73230 kubeadm.go:394] duration metric: took 8m3.018298427s to StartCluster
	I0906 20:12:59.417673   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0906 20:12:59.417741   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0906 20:12:59.464005   73230 cri.go:89] found id: ""
	I0906 20:12:59.464033   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.464040   73230 logs.go:278] No container was found matching "kube-apiserver"
	I0906 20:12:59.464045   73230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0906 20:12:59.464101   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0906 20:12:59.504218   73230 cri.go:89] found id: ""
	I0906 20:12:59.504252   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.504264   73230 logs.go:278] No container was found matching "etcd"
	I0906 20:12:59.504271   73230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0906 20:12:59.504327   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0906 20:12:59.541552   73230 cri.go:89] found id: ""
	I0906 20:12:59.541579   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.541589   73230 logs.go:278] No container was found matching "coredns"
	I0906 20:12:59.541596   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0906 20:12:59.541663   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0906 20:12:59.580135   73230 cri.go:89] found id: ""
	I0906 20:12:59.580158   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.580168   73230 logs.go:278] No container was found matching "kube-scheduler"
	I0906 20:12:59.580174   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0906 20:12:59.580220   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0906 20:12:59.622453   73230 cri.go:89] found id: ""
	I0906 20:12:59.622486   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.622498   73230 logs.go:278] No container was found matching "kube-proxy"
	I0906 20:12:59.622518   73230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0906 20:12:59.622587   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0906 20:12:59.661561   73230 cri.go:89] found id: ""
	I0906 20:12:59.661590   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.661601   73230 logs.go:278] No container was found matching "kube-controller-manager"
	I0906 20:12:59.661608   73230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0906 20:12:59.661668   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0906 20:12:59.695703   73230 cri.go:89] found id: ""
	I0906 20:12:59.695732   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.695742   73230 logs.go:278] No container was found matching "kindnet"
	I0906 20:12:59.695749   73230 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0906 20:12:59.695808   73230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0906 20:12:59.739701   73230 cri.go:89] found id: ""
	I0906 20:12:59.739733   73230 logs.go:276] 0 containers: []
	W0906 20:12:59.739744   73230 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0906 20:12:59.739756   73230 logs.go:123] Gathering logs for container status ...
	I0906 20:12:59.739771   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0906 20:12:59.791400   73230 logs.go:123] Gathering logs for kubelet ...
	I0906 20:12:59.791428   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0906 20:12:59.851142   73230 logs.go:123] Gathering logs for dmesg ...
	I0906 20:12:59.851179   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0906 20:12:59.867242   73230 logs.go:123] Gathering logs for describe nodes ...
	I0906 20:12:59.867278   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0906 20:12:59.941041   73230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0906 20:12:59.941060   73230 logs.go:123] Gathering logs for CRI-O ...
	I0906 20:12:59.941071   73230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0906 20:13:00.061377   73230 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0906 20:13:00.061456   73230 out.go:270] * 
	W0906 20:13:00.061515   73230 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.061532   73230 out.go:270] * 
	W0906 20:13:00.062343   73230 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0906 20:13:00.065723   73230 out.go:201] 
	W0906 20:13:00.066968   73230 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0906 20:13:00.067028   73230 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0906 20:13:00.067059   73230 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0906 20:13:00.068497   73230 out.go:201] 
	
	
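	For reference, the remediation named in the suggestion above could be retried roughly as follows. This is a minimal sketch only: the profile name old-k8s-version-843298 is taken from the CRI-O log below, and only the --extra-config=kubelet.cgroup-driver=systemd flag and the journalctl check come from the suggestion itself; the delete/start sequence is an assumption for illustration, not the harness's actual procedure.
	
	# recreate the failing profile with the kubelet cgroup driver pinned to systemd
	minikube delete -p old-k8s-version-843298
	minikube start -p old-k8s-version-843298 --extra-config=kubelet.cgroup-driver=systemd
	# inspect the kubelet unit inside the node, as advised by the suggestion
	minikube ssh -p old-k8s-version-843298 -- sudo journalctl -xeu kubelet
	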
	==> CRI-O <==
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.684221977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654269684197389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8490d232-f29f-442d-a120-050b8ef909e6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.684844608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=346438e6-2828-48e5-a37a-5382e71b679f name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.684914298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=346438e6-2828-48e5-a37a-5382e71b679f name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.684953077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=346438e6-2828-48e5-a37a-5382e71b679f name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.718480273Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e40927cc-8878-4357-8c80-cb95ff33ddad name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.718574689Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e40927cc-8878-4357-8c80-cb95ff33ddad name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.720520949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24a84983-d195-4cd1-8cc4-4491cabb8322 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.720947937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654269720920082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24a84983-d195-4cd1-8cc4-4491cabb8322 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.721638487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5dcbb87-12de-48e0-a036-497302f3d601 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.721708923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5dcbb87-12de-48e0-a036-497302f3d601 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.721818675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d5dcbb87-12de-48e0-a036-497302f3d601 name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.757802750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32adcadf-77b0-4464-910b-4d27ff379385 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.757913652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32adcadf-77b0-4464-910b-4d27ff379385 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.758948971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f626e85-4542-401c-a0ce-ba2991e0e074 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.759362423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654269759343297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f626e85-4542-401c-a0ce-ba2991e0e074 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.760046212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69ce162b-a189-4217-a8d7-c91a8c7c02fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.760113107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69ce162b-a189-4217-a8d7-c91a8c7c02fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.760151471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=69ce162b-a189-4217-a8d7-c91a8c7c02fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.796928456Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4cf806c2-25e6-4192-8180-dfc4ba374b74 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.797029440Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4cf806c2-25e6-4192-8180-dfc4ba374b74 name=/runtime.v1.RuntimeService/Version
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.799060390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8cabe394-82d4-40aa-b346-5c6dac880b14 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.799503531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1725654269799458838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cabe394-82d4-40aa-b346-5c6dac880b14 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.800185905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6568fe8-7a00-47f2-b532-e784ebc8c24e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.800260048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6568fe8-7a00-47f2-b532-e784ebc8c24e name=/runtime.v1.RuntimeService/ListContainers
	Sep 06 20:24:29 old-k8s-version-843298 crio[630]: time="2024-09-06 20:24:29.800292768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b6568fe8-7a00-47f2-b532-e784ebc8c24e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep 6 20:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050933] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039157] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.987920] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.571048] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.647123] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.681954] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.060444] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073389] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.178170] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.167558] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.279257] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +6.753089] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.068747] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.083570] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[Sep 6 20:05] kauditd_printk_skb: 46 callbacks suppressed
	[Sep 6 20:09] systemd-fstab-generator[5052]: Ignoring "noauto" option for root device
	[Sep 6 20:11] systemd-fstab-generator[5331]: Ignoring "noauto" option for root device
	[  +0.061919] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:24:29 up 19 min,  0 users,  load average: 0.07, 0.07, 0.06
	Linux old-k8s-version-843298 5.10.207 #1 SMP Tue Sep 3 21:45:30 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bd73b0, 0xc000132e40)
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: goroutine 156 [chan receive]:
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000cb2fc0)
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: goroutine 157 [select]:
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cb7ef0, 0x4f0ac20, 0xc0005ad360, 0x1, 0xc0001000c0)
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d8460, 0xc0001000c0)
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bd73f0, 0xc000132f80)
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 06 20:24:29 old-k8s-version-843298 kubelet[6824]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 06 20:24:29 old-k8s-version-843298 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 06 20:24:29 old-k8s-version-843298 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298 -n old-k8s-version-843298
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 2 (246.425621ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-843298" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (144.51s)
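Note: the kubelet journal above ends with the service exiting (status=255) while the apiserver on localhost:8443 refuses connections, which is why the framework reports state "Stopped" and skips the kubectl checks. A minimal, illustrative way to confirm that state by hand (assuming the old-k8s-version-843298 VM is still up; these commands are not part of the test itself) would be:

	out/minikube-linux-amd64 status -p old-k8s-version-843298
	out/minikube-linux-amd64 ssh -p old-k8s-version-843298 "sudo systemctl status kubelet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-843298 "sudo crictl ps -a"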

                                                
                                    

Test pass (243/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.11
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 4.26
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 84.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 133.43
31 TestAddons/serial/GCPAuth/Namespaces 0.14
35 TestAddons/parallel/InspektorGadget 11.93
37 TestAddons/parallel/HelmTiller 13.07
39 TestAddons/parallel/CSI 49.55
40 TestAddons/parallel/Headlamp 16.7
41 TestAddons/parallel/CloudSpanner 6.57
42 TestAddons/parallel/LocalPath 8.14
43 TestAddons/parallel/NvidiaDevicePlugin 5.56
44 TestAddons/parallel/Yakd 10.79
45 TestAddons/StoppedEnableDisable 92.8
46 TestCertOptions 73.02
47 TestCertExpiration 316.66
49 TestForceSystemdFlag 59
50 TestForceSystemdEnv 58.56
52 TestKVMDriverInstallOrUpdate 1.2
56 TestErrorSpam/setup 42.99
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.75
59 TestErrorSpam/pause 1.55
60 TestErrorSpam/unpause 1.79
61 TestErrorSpam/stop 5.27
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 57.97
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 33.87
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
73 TestFunctional/serial/CacheCmd/cache/add_local 1.12
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 33.06
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.36
84 TestFunctional/serial/LogsFileCmd 1.44
85 TestFunctional/serial/InvalidService 4.25
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 12.69
89 TestFunctional/parallel/DryRun 0.25
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.75
95 TestFunctional/parallel/ServiceCmdConnect 11.76
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 37.18
99 TestFunctional/parallel/SSHCmd 0.48
100 TestFunctional/parallel/CpCmd 1.45
101 TestFunctional/parallel/MySQL 27.19
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 1.56
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
119 TestFunctional/parallel/ImageCommands/ImageBuild 5.97
120 TestFunctional/parallel/ImageCommands/Setup 0.42
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.78
123 TestFunctional/parallel/MountCmd/any-port 11.5
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.98
125 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.72
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.04
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
133 TestFunctional/parallel/ProfileCmd/profile_list 0.26
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
135 TestFunctional/parallel/MountCmd/specific-port 1.79
136 TestFunctional/parallel/ServiceCmd/List 0.34
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.93
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
139 TestFunctional/parallel/ServiceCmd/Format 0.42
140 TestFunctional/parallel/MountCmd/VerifyCleanup 0.9
141 TestFunctional/parallel/ServiceCmd/URL 0.41
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 202.64
158 TestMultiControlPlane/serial/DeployApp 5.37
159 TestMultiControlPlane/serial/PingHostFromPods 1.18
160 TestMultiControlPlane/serial/AddWorkerNode 53.63
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.78
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 229.85
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
174 TestMultiControlPlane/serial/AddSecondaryNode 74.04
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestJSONOutput/start/Command 58.97
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.68
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.62
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.37
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 89.41
211 TestMountStart/serial/StartWithMountFirst 27.64
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 24.42
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.69
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 23.22
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 118.89
223 TestMultiNode/serial/DeployApp2Nodes 6.09
224 TestMultiNode/serial/PingHostFrom2Pods 0.81
225 TestMultiNode/serial/AddNode 50.77
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.89
229 TestMultiNode/serial/StopNode 2.32
230 TestMultiNode/serial/StartAfterStop 36.38
232 TestMultiNode/serial/DeleteNode 2.18
234 TestMultiNode/serial/RestartMultiNode 184.14
235 TestMultiNode/serial/ValidateNameConflict 44.06
242 TestScheduledStopUnix 112.87
246 TestRunningBinaryUpgrade 195.49
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 100.85
260 TestNetworkPlugins/group/false 3.15
264 TestNoKubernetes/serial/StartWithStopK8s 40.39
265 TestNoKubernetes/serial/Start 28.53
266 TestStoppedBinaryUpgrade/Setup 0.44
267 TestStoppedBinaryUpgrade/Upgrade 150.15
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
269 TestNoKubernetes/serial/ProfileList 1.12
270 TestNoKubernetes/serial/Stop 1.29
271 TestNoKubernetes/serial/StartNoArgs 42.05
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
275 TestPause/serial/Start 76.94
283 TestNetworkPlugins/group/auto/Start 116.23
285 TestNetworkPlugins/group/kindnet/Start 65.4
286 TestNetworkPlugins/group/calico/Start 99.85
287 TestNetworkPlugins/group/auto/KubeletFlags 0.2
288 TestNetworkPlugins/group/auto/NetCatPod 10.24
289 TestNetworkPlugins/group/auto/DNS 0.15
290 TestNetworkPlugins/group/auto/Localhost 0.13
291 TestNetworkPlugins/group/auto/HairPin 0.13
292 TestNetworkPlugins/group/custom-flannel/Start 93.02
293 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
295 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
296 TestNetworkPlugins/group/kindnet/DNS 0.26
297 TestNetworkPlugins/group/kindnet/Localhost 0.14
298 TestNetworkPlugins/group/kindnet/HairPin 0.15
299 TestNetworkPlugins/group/enable-default-cni/Start 56.94
300 TestNetworkPlugins/group/flannel/Start 89.08
301 TestNetworkPlugins/group/calico/ControllerPod 6.01
302 TestNetworkPlugins/group/calico/KubeletFlags 0.2
303 TestNetworkPlugins/group/calico/NetCatPod 10.21
304 TestNetworkPlugins/group/calico/DNS 0.15
305 TestNetworkPlugins/group/calico/Localhost 0.17
306 TestNetworkPlugins/group/calico/HairPin 0.14
307 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
308 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.29
309 TestNetworkPlugins/group/bridge/Start 101.73
310 TestNetworkPlugins/group/custom-flannel/DNS 0.17
311 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
312 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.35
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
321 TestStartStop/group/no-preload/serial/FirstStart 94.98
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
324 TestNetworkPlugins/group/flannel/NetCatPod 12.2
325 TestNetworkPlugins/group/flannel/DNS 0.21
326 TestNetworkPlugins/group/flannel/Localhost 0.17
327 TestNetworkPlugins/group/flannel/HairPin 0.15
329 TestStartStop/group/embed-certs/serial/FirstStart 62.58
330 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
331 TestNetworkPlugins/group/bridge/NetCatPod 12.23
332 TestNetworkPlugins/group/bridge/DNS 0.15
333 TestNetworkPlugins/group/bridge/Localhost 0.13
334 TestNetworkPlugins/group/bridge/HairPin 0.13
336 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.2
337 TestStartStop/group/no-preload/serial/DeployApp 8.28
338 TestStartStop/group/embed-certs/serial/DeployApp 9.27
339 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
341 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
343 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
348 TestStartStop/group/no-preload/serial/SecondStart 686.78
349 TestStartStop/group/embed-certs/serial/SecondStart 604.5
353 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 585.43
354 TestStartStop/group/old-k8s-version/serial/Stop 5.48
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
366 TestStartStop/group/newest-cni/serial/FirstStart 47.9
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
369 TestStartStop/group/newest-cni/serial/Stop 10.54
370 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
371 TestStartStop/group/newest-cni/serial/SecondStart 36.94
372 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
375 TestStartStop/group/newest-cni/serial/Pause 3.52
x
+
TestDownloadOnly/v1.20.0/json-events (11.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-726386 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-726386 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.114129755s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-726386
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-726386: exit status 85 (52.314ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |          |
	|         | -p download-only-726386        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:13
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:13.581110   13206 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:13.581240   13206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:13.581250   13206 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:13.581254   13206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:13.581436   13206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	W0906 18:29:13.581546   13206 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19576-6021/.minikube/config/config.json: open /home/jenkins/minikube-integration/19576-6021/.minikube/config/config.json: no such file or directory
	I0906 18:29:13.582086   13206 out.go:352] Setting JSON to true
	I0906 18:29:13.582938   13206 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":703,"bootTime":1725646651,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:29:13.583005   13206 start.go:139] virtualization: kvm guest
	I0906 18:29:13.585332   13206 out.go:97] [download-only-726386] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0906 18:29:13.585422   13206 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 18:29:13.585459   13206 notify.go:220] Checking for updates...
	I0906 18:29:13.586814   13206 out.go:169] MINIKUBE_LOCATION=19576
	I0906 18:29:13.588222   13206 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:13.589584   13206 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:29:13.590813   13206 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:13.591879   13206 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0906 18:29:13.593877   13206 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 18:29:13.594093   13206 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:13.691027   13206 out.go:97] Using the kvm2 driver based on user configuration
	I0906 18:29:13.691057   13206 start.go:297] selected driver: kvm2
	I0906 18:29:13.691065   13206 start.go:901] validating driver "kvm2" against <nil>
	I0906 18:29:13.691374   13206 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:13.691481   13206 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19576-6021/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0906 18:29:13.706477   13206 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0906 18:29:13.706553   13206 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:13.707028   13206 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0906 18:29:13.707164   13206 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 18:29:13.707220   13206 cni.go:84] Creating CNI manager for ""
	I0906 18:29:13.707233   13206 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0906 18:29:13.707240   13206 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 18:29:13.707315   13206 start.go:340] cluster config:
	{Name:download-only-726386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-726386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:13.707488   13206 iso.go:125] acquiring lock: {Name:mk1321fa8899c9f525734390a9e3f83f593ffe5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:13.709423   13206 out.go:97] Downloading VM boot image ...
	I0906 18:29:13.709459   13206 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19576-6021/.minikube/cache/iso/amd64/minikube-v1.34.0-amd64.iso
	I0906 18:29:17.341344   13206 out.go:97] Starting "download-only-726386" primary control-plane node in "download-only-726386" cluster
	I0906 18:29:17.341372   13206 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 18:29:17.366141   13206 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 18:29:17.366188   13206 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:17.366355   13206 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0906 18:29:17.368204   13206 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0906 18:29:17.368228   13206 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0906 18:29:17.396068   13206 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0906 18:29:23.255529   13206 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0906 18:29:23.255633   13206 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19576-6021/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-726386 host does not exist
	  To start a cluster, run: "minikube start -p download-only-726386"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-726386
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (4.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-693029 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-693029 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.254933586s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (4.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-693029
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-693029: exit status 85 (57.758202ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-726386        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-726386        | download-only-726386 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | -o=json --download-only        | download-only-693029 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-693029        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:25
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:25.003250   13415 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:25.003382   13415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:25.003418   13415 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:25.003445   13415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:25.003764   13415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:29:25.004331   13415 out.go:352] Setting JSON to true
	I0906 18:29:25.005166   13415 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":714,"bootTime":1725646651,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:29:25.005221   13415 start.go:139] virtualization: kvm guest
	I0906 18:29:25.007401   13415 out.go:97] [download-only-693029] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 18:29:25.007511   13415 notify.go:220] Checking for updates...
	I0906 18:29:25.008831   13415 out.go:169] MINIKUBE_LOCATION=19576
	I0906 18:29:25.010182   13415 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:25.011353   13415 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:29:25.012478   13415 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:29:25.013551   13415 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-693029 host does not exist
	  To start a cluster, run: "minikube start -p download-only-693029"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-693029
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-071210 --alsologtostderr --binary-mirror http://127.0.0.1:42457 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-071210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-071210
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (84.56s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-891908 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-891908 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.540091714s)
helpers_test.go:175: Cleaning up "offline-crio-891908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-891908
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-891908: (1.019589336s)
--- PASS: TestOffline (84.56s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-959832
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-959832: exit status 85 (50.333933ms)

                                                
                                                
-- stdout --
	* Profile "addons-959832" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-959832"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-959832
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-959832: exit status 85 (50.694824ms)

                                                
                                                
-- stdout --
	* Profile "addons-959832" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-959832"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (133.43s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-959832 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-959832 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m13.432799232s)
--- PASS: TestAddons/Setup (133.43s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-959832 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-959832 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.93s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lqq9b" [502295f7-0b10-4f01-86c6-7c5a18b591ec] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004227941s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-959832
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-959832: (5.929679103s)
--- PASS: TestAddons/parallel/InspektorGadget (11.93s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (13.07s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.507895ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-d2ggh" [5951b042-9892-4eb8-b567-933475c4a163] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004529359s
addons_test.go:475: (dbg) Run:  kubectl --context addons-959832 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-959832 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.437256421s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.07s)

                                                
                                    
x
+
TestAddons/parallel/CSI (49.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.094374ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-959832 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-959832 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [15910977-0ec8-40fe-9a51-3a9f4e4624cd] Pending
helpers_test.go:344: "task-pv-pod" [15910977-0ec8-40fe-9a51-3a9f4e4624cd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [15910977-0ec8-40fe-9a51-3a9f4e4624cd] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.005171612s
addons_test.go:590: (dbg) Run:  kubectl --context addons-959832 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-959832 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-959832 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-959832 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-959832 delete pod task-pv-pod: (1.164388065s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-959832 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-959832 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-959832 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8f81d23f-9720-4f6f-9f69-31129d5cc149] Pending
helpers_test.go:344: "task-pv-pod-restore" [8f81d23f-9720-4f6f-9f69-31129d5cc149] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8f81d23f-9720-4f6f-9f69-31129d5cc149] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004417806s
addons_test.go:632: (dbg) Run:  kubectl --context addons-959832 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-959832 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-959832 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-959832 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.778799574s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.55s)
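
Note: the repeated helpers_test.go:394 lines above are just a poll of the claim's .status.phase until it reports Bound. A minimal Go sketch of that wait loop, assuming kubectl is on PATH; the helper name and 2s interval are illustrative, while the context, namespace and PVC names are taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
// until the claim reaches the wanted phase or the timeout expires.
func waitForPVCPhase(ctx, ns, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s never reached phase %q", ns, name, want)
}

func main() {
	if err := waitForPVCPhase("addons-959832", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}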

                                                
                                    
TestAddons/parallel/Headlamp (16.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-959832 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-959832 --alsologtostderr -v=1: (1.008534596s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-gnq72" [80a6831f-da27-4545-b055-5b15078d1cc8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-gnq72" [80a6831f-da27-4545-b055-5b15078d1cc8] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004971687s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-959832 addons disable headlamp --alsologtostderr -v=1: (5.683334889s)
--- PASS: TestAddons/parallel/Headlamp (16.70s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-zh76q" [79327e55-0b23-469f-bdc9-0611cfa8a848] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00395606s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-959832
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                    
TestAddons/parallel/LocalPath (8.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-959832 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-959832 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-959832 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [754a36f2-796a-43db-86bb-d5a98787bdac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [754a36f2-796a-43db-86bb-d5a98787bdac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [754a36f2-796a-43db-86bb-d5a98787bdac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003347848s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-959832 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 ssh "cat /opt/local-path-provisioner/pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-959832 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-959832 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.14s)
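
Note: the ssh "cat .../file1" step above is how the test confirms the busybox pod's write actually landed on the node's local-path volume. A small Go sketch of that read-back, assuming the minikube binary path and profile name from the log; the PV directory name is copied from the log line above and would differ on another run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The local-path provisioner stores volumes on the node under
	// /opt/local-path-provisioner/<pv-name>_<namespace>_<pvc-name>/.
	// Reading the file back through `minikube ssh` confirms the data
	// actually landed on the host path.
	path := "/opt/local-path-provisioner/pvc-d025f5f2-5e2f-4f70-8eee-6bc1c0e53cc9_default_test-pvc/file1"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "addons-959832",
		"ssh", "cat "+path).CombinedOutput()
	if err != nil {
		fmt.Println("read-back failed:", err)
		return
	}
	fmt.Printf("file1 contents: %s\n", out)
}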

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nsxpz" [c35f7718-6879-4edb-9a8b-5b4a82ad2a7c] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004281937s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-959832
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                    
TestAddons/parallel/Yakd (10.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2l828" [7ac00c1c-d26e-4f08-b91c-49baa60d8def] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004989863s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-959832 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-959832 addons disable yakd --alsologtostderr -v=1: (5.784774897s)
--- PASS: TestAddons/parallel/Yakd (10.79s)

                                                
                                    
TestAddons/StoppedEnableDisable (92.8s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-959832
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-959832: (1m32.542484501s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-959832
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-959832
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-959832
--- PASS: TestAddons/StoppedEnableDisable (92.80s)

                                                
                                    
TestCertOptions (73.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-417185 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-417185 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m11.761603688s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-417185 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-417185 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-417185 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-417185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-417185
--- PASS: TestCertOptions (73.02s)
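
Note: the openssl step above is what verifies that the extra --apiserver-ips/--apiserver-names values ended up as SANs in the generated apiserver certificate. A minimal Go sketch of that check, assuming the minikube binary and profile name from the log; the list of SANs to look for is inferred from the start flags:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the generated apiserver certificate from inside the VM and check
	// that the extra SANs passed via --apiserver-ips / --apiserver-names
	// are present in it.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-417185",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println("ssh/openssl failed:", err)
		return
	}
	for _, san := range []string{"192.168.15.15", "www.google.com", "localhost"} {
		if !strings.Contains(string(out), san) {
			fmt.Println("missing SAN:", san)
		}
	}
}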

                                                
                                    
TestCertExpiration (316.66s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-097103 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-097103 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m3.499967016s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-097103 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-097103 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m11.977024889s)
helpers_test.go:175: Cleaning up "cert-expiration-097103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-097103
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-097103: (1.1782161s)
--- PASS: TestCertExpiration (316.66s)
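
Note: one way to confirm the second start with --cert-expiration=8760h actually reissued the certificates is to print the apiserver certificate's notAfter date over ssh. This check is illustrative only and is not something the test itself logs:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// After restarting with --cert-expiration=8760h the apiserver cert should
	// have been reissued; printing its notAfter date is a quick confirmation.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-expiration-097103",
		"ssh", "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver cert expiry: %s", out)
}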

                                                
                                    
TestForceSystemdFlag (59s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-689823 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-689823 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.999310358s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-689823 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-689823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-689823
--- PASS: TestForceSystemdFlag (59.00s)
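
Note: the "cat /etc/crio/crio.conf.d/02-crio.conf" step is how the test inspects the CRI-O drop-in generated for --force-systemd. A small Go sketch of the same check; the exact cgroup_manager = "systemd" line it greps for is an assumption about the drop-in's contents, not copied from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// With --force-systemd the generated CRI-O drop-in should select the
	// systemd cgroup manager; read the drop-in over ssh and grep for it.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-689823",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not configured")
	}
}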

                                                
                                    
TestForceSystemdEnv (58.56s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-924715 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-924715 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.733195084s)
helpers_test.go:175: Cleaning up "force-systemd-env-924715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-924715
--- PASS: TestForceSystemdEnv (58.56s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.2s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.20s)

                                                
                                    
TestErrorSpam/setup (42.99s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-089098 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-089098 --driver=kvm2  --container-runtime=crio
E0906 18:46:44.179004   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:44.185936   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:44.197367   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:44.218822   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:44.260287   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:44.341721   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:44.503240   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:44.825094   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:45.467210   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:46.748652   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:49.311548   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:46:54.433458   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:04.674791   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-089098 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-089098 --driver=kvm2  --container-runtime=crio: (42.986968303s)
--- PASS: TestErrorSpam/setup (42.99s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (5.27s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 stop
E0906 18:47:25.156218   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 stop: (2.308477577s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 stop: (1.362256666s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-089098 --log_dir /tmp/nospam-089098 stop: (1.599357184s)
--- PASS: TestErrorSpam/stop (5.27s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19576-6021/.minikube/files/etc/test/nested/copy/13178/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.97s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-206035 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0906 18:48:06.118907   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-206035 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (57.971762394s)
--- PASS: TestFunctional/serial/StartWithProxy (57.97s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (33.87s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-206035 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-206035 --alsologtostderr -v=8: (33.865468352s)
functional_test.go:663: soft start took 33.866174001s for "functional-206035" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.87s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-206035 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-206035 cache add registry.k8s.io/pause:3.1: (1.090998494s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-206035 cache add registry.k8s.io/pause:3.3: (1.187698016s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-206035 cache add registry.k8s.io/pause:latest: (1.165602252s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-206035 /tmp/TestFunctionalserialCacheCmdcacheadd_local3607858635/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 cache add minikube-local-cache-test:functional-206035
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 cache delete minikube-local-cache-test:functional-206035
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-206035
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-206035 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.215826ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-206035 cache reload: (1.016695739s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
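
Note: the sequence above is rmi on the node, a failing crictl inspecti, "cache reload" to push the cached images back into the runtime, then a succeeding inspecti. A minimal Go sketch of the same round trip using the binary, profile and image names from the log (the run helper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one command, echoes its output, and returns its error.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	const mk = "out/minikube-linux-amd64"
	const img = "registry.k8s.io/pause:latest"
	// Remove the image from the node, confirm crictl no longer sees it,
	// then `cache reload` restores every cached image into the runtime.
	_ = run(mk, "-p", "functional-206035", "ssh", "sudo crictl rmi "+img)
	_ = run(mk, "-p", "functional-206035", "ssh", "sudo crictl inspecti "+img) // expected to fail here
	_ = run(mk, "-p", "functional-206035", "cache", "reload")
	if err := run(mk, "-p", "functional-206035", "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}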

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 kubectl -- --context functional-206035 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-206035 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-206035 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0906 18:49:28.042272   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-206035 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.057191087s)
functional_test.go:761: restart took 33.057311653s for "functional-206035" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.06s)
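
Note: a quick way to confirm the --extra-config admission-plugins value was applied after the restart is to look for it in the kube-apiserver static pod's command line. This verification is illustrative and not part of the test output; the component=kube-apiserver label is the standard kubeadm label for that pod:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// After restarting with --extra-config=apiserver.enable-admission-plugins=...,
	// the value should appear in the kube-apiserver container's command line.
	out, err := exec.Command("kubectl", "--context", "functional-206035",
		"-n", "kube-system", "get", "pod", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	if strings.Contains(string(out), "NamespaceAutoProvision") {
		fmt.Println("admission plugin flag applied")
	} else {
		fmt.Println("flag not found in apiserver command")
	}
}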

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-206035 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
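
Note: the test lists the control-plane pods as JSON (functional_test.go:810) and then reports each component's phase and Ready status, as shown above. A minimal Go sketch of that parse, assuming kubectl and the functional-206035 context; the struct only models the fields actually inspected:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct{ Name string } `json:"metadata"`
		Status   struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// List the control-plane pods the same way the test does and report
	// whether each one is Running and Ready.
	out, err := exec.Command("kubectl", "--context", "functional-206035",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s: %s / %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}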

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-206035 logs: (1.35720254s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 logs --file /tmp/TestFunctionalserialLogsFileCmd1831700654/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-206035 logs --file /tmp/TestFunctionalserialLogsFileCmd1831700654/001/logs.txt: (1.439307108s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                    
TestFunctional/serial/InvalidService (4.25s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-206035 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-206035
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-206035: exit status 115 (282.503723ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.3:30133 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-206035 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)
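
Note: exit status 115 above is minikube's SVC_UNREACHABLE code for a service that has no running pods behind it. A small Go sketch that reproduces the failing invocation and reads the exit code back out (fixture and profile names are taken from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `minikube service` refuses to open a service with no running pods;
	// run it against the invalid-svc fixture and inspect the exit status.
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-206035")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode()) // 115 == SVC_UNREACHABLE in the log above
	} else {
		fmt.Println("unexpected result:", err)
	}
}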

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-206035 config get cpus: exit status 14 (48.883412ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-206035 config get cpus: exit status 14 (60.97269ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
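
Note: exit status 14 above is what "config get" returns when the key is not set; after "config set cpus 2" the same get exits 0, and unsetting restores the error. A minimal Go sketch driving that sequence and printing the exit codes (the getExit helper is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// getExit runs a minikube subcommand and returns its exit code.
func getExit(args ...string) int {
	if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	const p = "functional-206035"
	fmt.Println("get (unset):", getExit("-p", p, "config", "get", "cpus")) // expect 14
	fmt.Println("set:        ", getExit("-p", p, "config", "set", "cpus", "2"))
	fmt.Println("get (set):  ", getExit("-p", p, "config", "get", "cpus")) // expect 0
	fmt.Println("unset:      ", getExit("-p", p, "config", "unset", "cpus"))
}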

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-206035 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-206035 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23188: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.69s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-206035 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-206035 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (128.788815ms)

                                                
                                                
-- stdout --
	* [functional-206035] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:49:58.943618   22821 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:49:58.943877   22821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:49:58.943888   22821 out.go:358] Setting ErrFile to fd 2...
	I0906 18:49:58.943892   22821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:49:58.944069   22821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:49:58.944615   22821 out.go:352] Setting JSON to false
	I0906 18:49:58.945548   22821 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1948,"bootTime":1725646651,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:49:58.945613   22821 start.go:139] virtualization: kvm guest
	I0906 18:49:58.947581   22821 out.go:177] * [functional-206035] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 18:49:58.948848   22821 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:49:58.948863   22821 notify.go:220] Checking for updates...
	I0906 18:49:58.951416   22821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:49:58.952762   22821 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:49:58.954021   22821 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:49:58.955165   22821 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:49:58.956316   22821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:49:58.957858   22821 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:49:58.958456   22821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:49:58.958516   22821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:49:58.973526   22821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37019
	I0906 18:49:58.973903   22821 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:49:58.974430   22821 main.go:141] libmachine: Using API Version  1
	I0906 18:49:58.974450   22821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:49:58.974799   22821 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:49:58.974971   22821 main.go:141] libmachine: (functional-206035) Calling .DriverName
	I0906 18:49:58.975209   22821 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:49:58.975509   22821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:49:58.975542   22821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:49:58.990146   22821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I0906 18:49:58.990593   22821 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:49:58.991125   22821 main.go:141] libmachine: Using API Version  1
	I0906 18:49:58.991153   22821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:49:58.991446   22821 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:49:58.991637   22821 main.go:141] libmachine: (functional-206035) Calling .DriverName
	I0906 18:49:59.024603   22821 out.go:177] * Using the kvm2 driver based on existing profile
	I0906 18:49:59.025793   22821 start.go:297] selected driver: kvm2
	I0906 18:49:59.025807   22821 start.go:901] validating driver "kvm2" against &{Name:functional-206035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-206035 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:49:59.025924   22821 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:49:59.027935   22821 out.go:201] 
	W0906 18:49:59.029079   22821 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 18:49:59.030233   22821 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-206035 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-206035 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-206035 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (131.303381ms)

                                                
                                                
-- stdout --
	* [functional-206035] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:49:58.061710   22703 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:49:58.061835   22703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:49:58.061843   22703 out.go:358] Setting ErrFile to fd 2...
	I0906 18:49:58.061850   22703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:49:58.062119   22703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 18:49:58.062612   22703 out.go:352] Setting JSON to false
	I0906 18:49:58.063543   22703 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1947,"bootTime":1725646651,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 18:49:58.063598   22703 start.go:139] virtualization: kvm guest
	I0906 18:49:58.065576   22703 out.go:177] * [functional-206035] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0906 18:49:58.066762   22703 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:49:58.066770   22703 notify.go:220] Checking for updates...
	I0906 18:49:58.068838   22703 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:49:58.070004   22703 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 18:49:58.071287   22703 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 18:49:58.072430   22703 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 18:49:58.073577   22703 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:49:58.075249   22703 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 18:49:58.075844   22703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:49:58.075897   22703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:49:58.091709   22703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39405
	I0906 18:49:58.092090   22703 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:49:58.092594   22703 main.go:141] libmachine: Using API Version  1
	I0906 18:49:58.092614   22703 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:49:58.092968   22703 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:49:58.093125   22703 main.go:141] libmachine: (functional-206035) Calling .DriverName
	I0906 18:49:58.093359   22703 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:49:58.093633   22703 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 18:49:58.093663   22703 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 18:49:58.108410   22703 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39069
	I0906 18:49:58.108774   22703 main.go:141] libmachine: () Calling .GetVersion
	I0906 18:49:58.109234   22703 main.go:141] libmachine: Using API Version  1
	I0906 18:49:58.109261   22703 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 18:49:58.109584   22703 main.go:141] libmachine: () Calling .GetMachineName
	I0906 18:49:58.109763   22703 main.go:141] libmachine: (functional-206035) Calling .DriverName
	I0906 18:49:58.144636   22703 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0906 18:49:58.145892   22703 start.go:297] selected driver: kvm2
	I0906 18:49:58.145911   22703 start.go:901] validating driver "kvm2" against &{Name:functional-206035 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.34.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-206035 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.3 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:49:58.146005   22703 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:49:58.148000   22703 out.go:201] 
	W0906 18:49:58.149300   22703 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 18:49:58.150670   22703 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-206035 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-206035 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-d9dpd" [aeeb7960-91cf-4644-a1bd-dc774c4f4079] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-d9dpd" [aeeb7960-91cf-4644-a1bd-dc774c4f4079] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004284841s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.3:30695
functional_test.go:1675: http://192.168.39.3:30695: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-d9dpd

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.3:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.3:30695
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.76s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (37.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [474a3b8f-9141-4055-8e34-2d2bb9082a53] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003762279s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-206035 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-206035 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-206035 get pvc myclaim -o=json
2024/09/06 18:50:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-206035 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-206035 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aef30f2d-2b49-407c-bc72-624e759a1c23] Pending
helpers_test.go:344: "sp-pod" [aef30f2d-2b49-407c-bc72-624e759a1c23] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aef30f2d-2b49-407c-bc72-624e759a1c23] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.00442743s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-206035 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-206035 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-206035 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [663bf4ee-a525-4416-ae49-08d4fab90988] Pending
helpers_test.go:344: "sp-pod" [663bf4ee-a525-4416-ae49-08d4fab90988] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003552294s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-206035 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.18s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh -n functional-206035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 cp functional-206035:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd715336515/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh -n functional-206035 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh -n functional-206035 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

                                                
                                    
TestFunctional/parallel/MySQL (27.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-206035 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-qhglb" [b3780233-0e36-45ad-bb5f-d4419e9837a8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-qhglb" [b3780233-0e36-45ad-bb5f-d4419e9837a8] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.003404273s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-206035 exec mysql-6cdb49bbb-qhglb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-206035 exec mysql-6cdb49bbb-qhglb -- mysql -ppassword -e "show databases;": exit status 1 (140.378289ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-206035 exec mysql-6cdb49bbb-qhglb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-206035 exec mysql-6cdb49bbb-qhglb -- mysql -ppassword -e "show databases;": exit status 1 (141.001274ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-206035 exec mysql-6cdb49bbb-qhglb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.19s)

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13178/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo cat /etc/test/nested/copy/13178/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13178.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo cat /etc/ssl/certs/13178.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13178.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo cat /usr/share/ca-certificates/13178.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/131782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo cat /etc/ssl/certs/131782.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/131782.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo cat /usr/share/ca-certificates/131782.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.56s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-206035 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-206035 ssh "sudo systemctl is-active docker": exit status 1 (228.60007ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-206035 ssh "sudo systemctl is-active containerd": exit status 1 (289.609638ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-206035 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-206035
localhost/kicbase/echo-server:functional-206035
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-206035 image ls --format short --alsologtostderr:
I0906 18:50:04.911092   24094 out.go:345] Setting OutFile to fd 1 ...
I0906 18:50:04.911400   24094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:50:04.911413   24094 out.go:358] Setting ErrFile to fd 2...
I0906 18:50:04.911420   24094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:50:04.911693   24094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
I0906 18:50:04.912468   24094 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:50:04.912606   24094 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:50:04.913157   24094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:50:04.913218   24094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:50:04.929104   24094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34795
I0906 18:50:04.929564   24094 main.go:141] libmachine: () Calling .GetVersion
I0906 18:50:04.930176   24094 main.go:141] libmachine: Using API Version  1
I0906 18:50:04.930202   24094 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:50:04.930518   24094 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:50:04.930713   24094 main.go:141] libmachine: (functional-206035) Calling .GetState
I0906 18:50:04.932617   24094 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:50:04.932655   24094 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:50:04.948128   24094 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36041
I0906 18:50:04.948583   24094 main.go:141] libmachine: () Calling .GetVersion
I0906 18:50:04.949154   24094 main.go:141] libmachine: Using API Version  1
I0906 18:50:04.949185   24094 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:50:04.949473   24094 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:50:04.949652   24094 main.go:141] libmachine: (functional-206035) Calling .DriverName
I0906 18:50:04.949861   24094 ssh_runner.go:195] Run: systemctl --version
I0906 18:50:04.949886   24094 main.go:141] libmachine: (functional-206035) Calling .GetSSHHostname
I0906 18:50:04.953068   24094 main.go:141] libmachine: (functional-206035) DBG | domain functional-206035 has defined MAC address 52:54:00:c8:3d:58 in network mk-functional-206035
I0906 18:50:04.953463   24094 main.go:141] libmachine: (functional-206035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:3d:58", ip: ""} in network mk-functional-206035: {Iface:virbr1 ExpiryTime:2024-09-06 19:47:44 +0000 UTC Type:0 Mac:52:54:00:c8:3d:58 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:functional-206035 Clientid:01:52:54:00:c8:3d:58}
I0906 18:50:04.953498   24094 main.go:141] libmachine: (functional-206035) DBG | domain functional-206035 has defined IP address 192.168.39.3 and MAC address 52:54:00:c8:3d:58 in network mk-functional-206035
I0906 18:50:04.953647   24094 main.go:141] libmachine: (functional-206035) Calling .GetSSHPort
I0906 18:50:04.953803   24094 main.go:141] libmachine: (functional-206035) Calling .GetSSHKeyPath
I0906 18:50:04.953958   24094 main.go:141] libmachine: (functional-206035) Calling .GetSSHUsername
I0906 18:50:04.954100   24094 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/functional-206035/id_rsa Username:docker}
I0906 18:50:05.069807   24094 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 18:50:05.137833   24094 main.go:141] libmachine: Making call to close driver server
I0906 18:50:05.137845   24094 main.go:141] libmachine: (functional-206035) Calling .Close
I0906 18:50:05.138132   24094 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:50:05.138154   24094 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:50:05.138163   24094 main.go:141] libmachine: Making call to close driver server
I0906 18:50:05.138172   24094 main.go:141] libmachine: (functional-206035) Calling .Close
I0906 18:50:05.138170   24094 main.go:141] libmachine: (functional-206035) DBG | Closing plugin on server side
I0906 18:50:05.138390   24094 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:50:05.138402   24094 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:50:05.138419   24094 main.go:141] libmachine: (functional-206035) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-206035 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-206035  | 759a70b39e80e | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| localhost/kicbase/echo-server           | functional-206035  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-206035 image ls --format table --alsologtostderr:
I0906 18:50:05.462189   24157 out.go:345] Setting OutFile to fd 1 ...
I0906 18:50:05.462496   24157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:50:05.462509   24157 out.go:358] Setting ErrFile to fd 2...
I0906 18:50:05.462515   24157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:50:05.462792   24157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
I0906 18:50:05.463561   24157 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:50:05.463714   24157 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:50:05.464267   24157 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:50:05.464320   24157 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:50:05.479347   24157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
I0906 18:50:05.479783   24157 main.go:141] libmachine: () Calling .GetVersion
I0906 18:50:05.480346   24157 main.go:141] libmachine: Using API Version  1
I0906 18:50:05.480371   24157 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:50:05.480731   24157 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:50:05.480939   24157 main.go:141] libmachine: (functional-206035) Calling .GetState
I0906 18:50:05.482809   24157 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:50:05.482844   24157 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:50:05.497956   24157 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45941
I0906 18:50:05.498423   24157 main.go:141] libmachine: () Calling .GetVersion
I0906 18:50:05.498920   24157 main.go:141] libmachine: Using API Version  1
I0906 18:50:05.498945   24157 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:50:05.499250   24157 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:50:05.499420   24157 main.go:141] libmachine: (functional-206035) Calling .DriverName
I0906 18:50:05.499618   24157 ssh_runner.go:195] Run: systemctl --version
I0906 18:50:05.499656   24157 main.go:141] libmachine: (functional-206035) Calling .GetSSHHostname
I0906 18:50:05.502584   24157 main.go:141] libmachine: (functional-206035) DBG | domain functional-206035 has defined MAC address 52:54:00:c8:3d:58 in network mk-functional-206035
I0906 18:50:05.502968   24157 main.go:141] libmachine: (functional-206035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:3d:58", ip: ""} in network mk-functional-206035: {Iface:virbr1 ExpiryTime:2024-09-06 19:47:44 +0000 UTC Type:0 Mac:52:54:00:c8:3d:58 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:functional-206035 Clientid:01:52:54:00:c8:3d:58}
I0906 18:50:05.502999   24157 main.go:141] libmachine: (functional-206035) DBG | domain functional-206035 has defined IP address 192.168.39.3 and MAC address 52:54:00:c8:3d:58 in network mk-functional-206035
I0906 18:50:05.503119   24157 main.go:141] libmachine: (functional-206035) Calling .GetSSHPort
I0906 18:50:05.503286   24157 main.go:141] libmachine: (functional-206035) Calling .GetSSHKeyPath
I0906 18:50:05.503464   24157 main.go:141] libmachine: (functional-206035) Calling .GetSSHUsername
I0906 18:50:05.503582   24157 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/functional-206035/id_rsa Username:docker}
I0906 18:50:05.596488   24157 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 18:50:05.668562   24157 main.go:141] libmachine: Making call to close driver server
I0906 18:50:05.668583   24157 main.go:141] libmachine: (functional-206035) Calling .Close
I0906 18:50:05.668914   24157 main.go:141] libmachine: (functional-206035) DBG | Closing plugin on server side
I0906 18:50:05.668945   24157 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:50:05.668955   24157 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:50:05.668965   24157 main.go:141] libmachine: Making call to close driver server
I0906 18:50:05.668972   24157 main.go:141] libmachine: (functional-206035) Calling .Close
I0906 18:50:05.669182   24157 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:50:05.669204   24157 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-206035 image ls --format json --alsologtostderr:
[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d
328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-206035"],"size":"4943877"},{"id":"759a70b39e80e54f0b6f72bbb4bdfdeb6565bed5db7e22304c1bbb3343c31a43","repoDigests":["localhost/minikube-local-cache-test@sha256:aa268834d1c912385f82bcaf5faa501c27bc64945957040da3b025604808601a"],"repoTags":["localhost/minikube-local-cache-test:functional-206035"],"size":"3330"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a812
0d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.
io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf699
9452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a832
1e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-206035 image ls --format json --alsologtostderr:
I0906 18:50:05.195626   24133 out.go:345] Setting OutFile to fd 1 ...
I0906 18:50:05.195898   24133 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:50:05.195910   24133 out.go:358] Setting ErrFile to fd 2...
I0906 18:50:05.195917   24133 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:50:05.196196   24133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
I0906 18:50:05.196994   24133 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:50:05.197132   24133 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:50:05.197711   24133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:50:05.197762   24133 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:50:05.212401   24133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
I0906 18:50:05.212915   24133 main.go:141] libmachine: () Calling .GetVersion
I0906 18:50:05.213501   24133 main.go:141] libmachine: Using API Version  1
I0906 18:50:05.213524   24133 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:50:05.213873   24133 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:50:05.214064   24133 main.go:141] libmachine: (functional-206035) Calling .GetState
I0906 18:50:05.215825   24133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:50:05.215872   24133 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:50:05.231269   24133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46233
I0906 18:50:05.231706   24133 main.go:141] libmachine: () Calling .GetVersion
I0906 18:50:05.232344   24133 main.go:141] libmachine: Using API Version  1
I0906 18:50:05.232374   24133 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:50:05.232673   24133 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:50:05.232879   24133 main.go:141] libmachine: (functional-206035) Calling .DriverName
I0906 18:50:05.233088   24133 ssh_runner.go:195] Run: systemctl --version
I0906 18:50:05.233113   24133 main.go:141] libmachine: (functional-206035) Calling .GetSSHHostname
I0906 18:50:05.236019   24133 main.go:141] libmachine: (functional-206035) DBG | domain functional-206035 has defined MAC address 52:54:00:c8:3d:58 in network mk-functional-206035
I0906 18:50:05.236414   24133 main.go:141] libmachine: (functional-206035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:3d:58", ip: ""} in network mk-functional-206035: {Iface:virbr1 ExpiryTime:2024-09-06 19:47:44 +0000 UTC Type:0 Mac:52:54:00:c8:3d:58 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:functional-206035 Clientid:01:52:54:00:c8:3d:58}
I0906 18:50:05.236444   24133 main.go:141] libmachine: (functional-206035) DBG | domain functional-206035 has defined IP address 192.168.39.3 and MAC address 52:54:00:c8:3d:58 in network mk-functional-206035
I0906 18:50:05.236580   24133 main.go:141] libmachine: (functional-206035) Calling .GetSSHPort
I0906 18:50:05.236753   24133 main.go:141] libmachine: (functional-206035) Calling .GetSSHKeyPath
I0906 18:50:05.236928   24133 main.go:141] libmachine: (functional-206035) Calling .GetSSHUsername
I0906 18:50:05.237076   24133 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/functional-206035/id_rsa Username:docker}
I0906 18:50:05.356099   24133 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 18:50:05.408124   24133 main.go:141] libmachine: Making call to close driver server
I0906 18:50:05.408139   24133 main.go:141] libmachine: (functional-206035) Calling .Close
I0906 18:50:05.408454   24133 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:50:05.408480   24133 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:50:05.408496   24133 main.go:141] libmachine: (functional-206035) DBG | Closing plugin on server side
I0906 18:50:05.408615   24133 main.go:141] libmachine: Making call to close driver server
I0906 18:50:05.408643   24133 main.go:141] libmachine: (functional-206035) Calling .Close
I0906 18:50:05.408899   24133 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:50:05.408924   24133 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:50:05.408949   24133 main.go:141] libmachine: (functional-206035) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-206035 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-206035
size: "4943877"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 759a70b39e80e54f0b6f72bbb4bdfdeb6565bed5db7e22304c1bbb3343c31a43
repoDigests:
- localhost/minikube-local-cache-test@sha256:aa268834d1c912385f82bcaf5faa501c27bc64945957040da3b025604808601a
repoTags:
- localhost/minikube-local-cache-test:functional-206035
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-206035 image ls --format yaml --alsologtostderr:
I0906 18:50:05.723644   24182 out.go:345] Setting OutFile to fd 1 ...
I0906 18:50:05.723780   24182 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:50:05.723791   24182 out.go:358] Setting ErrFile to fd 2...
I0906 18:50:05.723797   24182 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:50:05.724054   24182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
I0906 18:50:05.724784   24182 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:50:05.724931   24182 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:50:05.725348   24182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:50:05.725389   24182 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:50:05.740669   24182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
I0906 18:50:05.741192   24182 main.go:141] libmachine: () Calling .GetVersion
I0906 18:50:05.741828   24182 main.go:141] libmachine: Using API Version  1
I0906 18:50:05.741861   24182 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:50:05.742245   24182 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:50:05.742434   24182 main.go:141] libmachine: (functional-206035) Calling .GetState
I0906 18:50:05.744258   24182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:50:05.744298   24182 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:50:05.758990   24182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45899
I0906 18:50:05.759357   24182 main.go:141] libmachine: () Calling .GetVersion
I0906 18:50:05.759842   24182 main.go:141] libmachine: Using API Version  1
I0906 18:50:05.759860   24182 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:50:05.760223   24182 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:50:05.760430   24182 main.go:141] libmachine: (functional-206035) Calling .DriverName
I0906 18:50:05.760640   24182 ssh_runner.go:195] Run: systemctl --version
I0906 18:50:05.760675   24182 main.go:141] libmachine: (functional-206035) Calling .GetSSHHostname
I0906 18:50:05.763170   24182 main.go:141] libmachine: (functional-206035) DBG | domain functional-206035 has defined MAC address 52:54:00:c8:3d:58 in network mk-functional-206035
I0906 18:50:05.763520   24182 main.go:141] libmachine: (functional-206035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:3d:58", ip: ""} in network mk-functional-206035: {Iface:virbr1 ExpiryTime:2024-09-06 19:47:44 +0000 UTC Type:0 Mac:52:54:00:c8:3d:58 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:functional-206035 Clientid:01:52:54:00:c8:3d:58}
I0906 18:50:05.763550   24182 main.go:141] libmachine: (functional-206035) DBG | domain functional-206035 has defined IP address 192.168.39.3 and MAC address 52:54:00:c8:3d:58 in network mk-functional-206035
I0906 18:50:05.763664   24182 main.go:141] libmachine: (functional-206035) Calling .GetSSHPort
I0906 18:50:05.763821   24182 main.go:141] libmachine: (functional-206035) Calling .GetSSHKeyPath
I0906 18:50:05.763962   24182 main.go:141] libmachine: (functional-206035) Calling .GetSSHUsername
I0906 18:50:05.764097   24182 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/functional-206035/id_rsa Username:docker}
I0906 18:50:05.907764   24182 ssh_runner.go:195] Run: sudo crictl images --output json
I0906 18:50:05.972017   24182 main.go:141] libmachine: Making call to close driver server
I0906 18:50:05.972029   24182 main.go:141] libmachine: (functional-206035) Calling .Close
I0906 18:50:05.972372   24182 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:50:05.972383   24182 main.go:141] libmachine: (functional-206035) DBG | Closing plugin on server side
I0906 18:50:05.972392   24182 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:50:05.972427   24182 main.go:141] libmachine: Making call to close driver server
I0906 18:50:05.972439   24182 main.go:141] libmachine: (functional-206035) Calling .Close
I0906 18:50:05.972726   24182 main.go:141] libmachine: (functional-206035) DBG | Closing plugin on server side
I0906 18:50:05.972761   24182 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:50:05.972773   24182 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-206035 ssh pgrep buildkitd: exit status 1 (213.249689ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image build -t localhost/my-image:functional-206035 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-206035 image build -t localhost/my-image:functional-206035 testdata/build --alsologtostderr: (5.25752642s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-206035 image build -t localhost/my-image:functional-206035 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ff20823b0f0
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-206035
--> cdf61ee074c
Successfully tagged localhost/my-image:functional-206035
cdf61ee074ce0a22d6f882595a6e8c6c069d7bb933a552689df0d5ac934b8c76
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-206035 image build -t localhost/my-image:functional-206035 testdata/build --alsologtostderr:
I0906 18:50:06.238993   24236 out.go:345] Setting OutFile to fd 1 ...
I0906 18:50:06.239266   24236 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:50:06.239276   24236 out.go:358] Setting ErrFile to fd 2...
I0906 18:50:06.239282   24236 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:50:06.239454   24236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
I0906 18:50:06.239993   24236 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:50:06.240549   24236 config.go:182] Loaded profile config "functional-206035": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0906 18:50:06.240953   24236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:50:06.241002   24236 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:50:06.257984   24236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44261
I0906 18:50:06.258461   24236 main.go:141] libmachine: () Calling .GetVersion
I0906 18:50:06.259036   24236 main.go:141] libmachine: Using API Version  1
I0906 18:50:06.259058   24236 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:50:06.259472   24236 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:50:06.259684   24236 main.go:141] libmachine: (functional-206035) Calling .GetState
I0906 18:50:06.261606   24236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0906 18:50:06.261653   24236 main.go:141] libmachine: Launching plugin server for driver kvm2
I0906 18:50:06.281022   24236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34243
I0906 18:50:06.281436   24236 main.go:141] libmachine: () Calling .GetVersion
I0906 18:50:06.281883   24236 main.go:141] libmachine: Using API Version  1
I0906 18:50:06.281903   24236 main.go:141] libmachine: () Calling .SetConfigRaw
I0906 18:50:06.282223   24236 main.go:141] libmachine: () Calling .GetMachineName
I0906 18:50:06.282393   24236 main.go:141] libmachine: (functional-206035) Calling .DriverName
I0906 18:50:06.282591   24236 ssh_runner.go:195] Run: systemctl --version
I0906 18:50:06.282611   24236 main.go:141] libmachine: (functional-206035) Calling .GetSSHHostname
I0906 18:50:06.285102   24236 main.go:141] libmachine: (functional-206035) DBG | domain functional-206035 has defined MAC address 52:54:00:c8:3d:58 in network mk-functional-206035
I0906 18:50:06.285456   24236 main.go:141] libmachine: (functional-206035) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:3d:58", ip: ""} in network mk-functional-206035: {Iface:virbr1 ExpiryTime:2024-09-06 19:47:44 +0000 UTC Type:0 Mac:52:54:00:c8:3d:58 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:functional-206035 Clientid:01:52:54:00:c8:3d:58}
I0906 18:50:06.285489   24236 main.go:141] libmachine: (functional-206035) DBG | domain functional-206035 has defined IP address 192.168.39.3 and MAC address 52:54:00:c8:3d:58 in network mk-functional-206035
I0906 18:50:06.285556   24236 main.go:141] libmachine: (functional-206035) Calling .GetSSHPort
I0906 18:50:06.285750   24236 main.go:141] libmachine: (functional-206035) Calling .GetSSHKeyPath
I0906 18:50:06.285906   24236 main.go:141] libmachine: (functional-206035) Calling .GetSSHUsername
I0906 18:50:06.286065   24236 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/functional-206035/id_rsa Username:docker}
I0906 18:50:06.403636   24236 build_images.go:161] Building image from path: /tmp/build.1902887978.tar
I0906 18:50:06.403724   24236 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 18:50:06.416718   24236 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1902887978.tar
I0906 18:50:06.433047   24236 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1902887978.tar: stat -c "%s %y" /var/lib/minikube/build/build.1902887978.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1902887978.tar': No such file or directory
I0906 18:50:06.433081   24236 ssh_runner.go:362] scp /tmp/build.1902887978.tar --> /var/lib/minikube/build/build.1902887978.tar (3072 bytes)
I0906 18:50:06.474960   24236 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1902887978
I0906 18:50:06.506886   24236 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1902887978 -xf /var/lib/minikube/build/build.1902887978.tar
I0906 18:50:06.519238   24236 crio.go:315] Building image: /var/lib/minikube/build/build.1902887978
I0906 18:50:06.519304   24236 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-206035 /var/lib/minikube/build/build.1902887978 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0906 18:50:11.348793   24236 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-206035 /var/lib/minikube/build/build.1902887978 --cgroup-manager=cgroupfs: (4.829464716s)
I0906 18:50:11.348897   24236 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1902887978
I0906 18:50:11.404029   24236 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1902887978.tar
I0906 18:50:11.442380   24236 build_images.go:217] Built localhost/my-image:functional-206035 from /tmp/build.1902887978.tar
I0906 18:50:11.442417   24236 build_images.go:133] succeeded building to: functional-206035
I0906 18:50:11.442423   24236 build_images.go:134] failed building to: 
I0906 18:50:11.442453   24236 main.go:141] libmachine: Making call to close driver server
I0906 18:50:11.442465   24236 main.go:141] libmachine: (functional-206035) Calling .Close
I0906 18:50:11.442822   24236 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:50:11.442842   24236 main.go:141] libmachine: Making call to close connection to plugin binary
I0906 18:50:11.442854   24236 main.go:141] libmachine: Making call to close driver server
I0906 18:50:11.442863   24236 main.go:141] libmachine: (functional-206035) Calling .Close
I0906 18:50:11.443195   24236 main.go:141] libmachine: Successfully made call to close driver server
I0906 18:50:11.443197   24236 main.go:141] libmachine: (functional-206035) DBG | Closing plugin on server side
I0906 18:50:11.443213   24236 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.97s)
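Note: the three build steps printed above suggest that the testdata/build context holds a Dockerfile roughly like the following; this is a reconstruction from the log, not the repository's actual file:

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

The image was produced with the command already shown in the test (out/minikube-linux-amd64 -p functional-206035 image build -t localhost/my-image:functional-206035 testdata/build --alsologtostderr); with the crio runtime, minikube hands the build off to podman inside the guest, as the "sudo podman build ... --cgroup-manager=cgroupfs" line in the log confirms.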

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-206035
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (11.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdany-port3430671545/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725648588822383599" to /tmp/TestFunctionalparallelMountCmdany-port3430671545/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725648588822383599" to /tmp/TestFunctionalparallelMountCmdany-port3430671545/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725648588822383599" to /tmp/TestFunctionalparallelMountCmdany-port3430671545/001/test-1725648588822383599
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-206035 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.530934ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 18:49 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 18:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 18:49 test-1725648588822383599
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh cat /mount-9p/test-1725648588822383599
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-206035 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [65fc2c46-07fe-44ed-ac4d-eb48c8b46acc] Pending
helpers_test.go:344: "busybox-mount" [65fc2c46-07fe-44ed-ac4d-eb48c8b46acc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [65fc2c46-07fe-44ed-ac4d-eb48c8b46acc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [65fc2c46-07fe-44ed-ac4d-eb48c8b46acc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.005171108s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-206035 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdany-port3430671545/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.50s)
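The mount flow above can be reproduced by hand: one terminal keeps the 9p mount alive while a second verifies it from inside the guest. A minimal sketch using the same binary and flags as the test (the host directory here is illustrative):

    # terminal 1: serve a host directory into the guest over 9p
    out/minikube-linux-amd64 mount -p functional-206035 /tmp/hostdir:/mount-9p --alsologtostderr -v=1

    # terminal 2: confirm the 9p mount and inspect its contents
    out/minikube-linux-amd64 -p functional-206035 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-206035 ssh -- ls -la /mount-9p

The first non-zero findmnt exit in the log is presumably just the probe running before the mount daemon has finished attaching; the test retries and succeeds.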

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image load --daemon kicbase/echo-server:functional-206035 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-206035 image load --daemon kicbase/echo-server:functional-206035 --alsologtostderr: (1.695589204s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.98s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-206035 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-206035 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-8r7nb" [58a359de-0577-4d5c-8426-5bcfe700e75d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-8r7nb" [58a359de-0577-4d5c-8426-5bcfe700e75d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.0180631s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)
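For context, the hello-node service exercised here (and resolved by the ServiceCmd/HTTPS, Format, and URL subtests further down) is created with plain kubectl and then looked up through minikube; these are the same commands the tests run:

    kubectl --context functional-206035 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-206035 expose deployment hello-node --type=NodePort --port=8080
    # resolved later by the ServiceCmd tests, e.g.:
    out/minikube-linux-amd64 -p functional-206035 service hello-node --url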

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image load --daemon kicbase/echo-server:functional-206035 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-206035 image load --daemon kicbase/echo-server:functional-206035 --alsologtostderr: (2.374933215s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.72s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-206035
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image load --daemon kicbase/echo-server:functional-206035 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image save kicbase/echo-server:functional-206035 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image rm kicbase/echo-server:functional-206035 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-206035
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 image save --daemon kicbase/echo-server:functional-206035 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-206035
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
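Taken together, the four image tests above exercise a save/remove/load round-trip for the cached echo-server image. Condensed into one sequence (paths and tags are the ones used in the log):

    out/minikube-linux-amd64 -p functional-206035 image save kicbase/echo-server:functional-206035 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-206035 image rm kicbase/echo-server:functional-206035 --alsologtostderr
    out/minikube-linux-amd64 -p functional-206035 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-206035 image ls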

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "210.89066ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.308571ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "210.715907ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "42.620433ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdspecific-port2116869288/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-206035 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (197.034797ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdspecific-port2116869288/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-206035 ssh "sudo umount -f /mount-9p": exit status 1 (222.346018ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-206035 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdspecific-port2116869288/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 service list -o json
functional_test.go:1494: Took "926.204423ms" to run "out/minikube-linux-amd64 -p functional-206035 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.3:31236
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup622264649/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup622264649/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup622264649/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-206035 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup622264649/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup622264649/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-206035 /tmp/TestFunctionalparallelMountCmdVerifyCleanup622264649/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-206035 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.3:31236
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-206035
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-206035
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-206035
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (202.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-313128 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0906 18:51:44.178843   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:11.885064   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-313128 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m21.978034875s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (202.64s)
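The HA cluster used by the remaining TestMultiControlPlane subtests is created once here. To reproduce the same topology locally, one would run the commands the test runs, with the driver and runtime matching this job's configuration:

    out/minikube-linux-amd64 start -p ha-313128 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr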

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-313128 -- rollout status deployment/busybox: (3.219277806s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-54m66 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-k99v6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-s2cgz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-54m66 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-k99v6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-s2cgz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-54m66 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-k99v6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-s2cgz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-54m66 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-54m66 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-k99v6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-k99v6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-s2cgz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-313128 -- exec busybox-7dff88458-s2cgz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (53.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-313128 -v=7 --alsologtostderr
E0906 18:54:49.184055   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:49.190469   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:49.201938   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:49.223404   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:49.264876   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:49.346333   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:49.508616   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:49.830557   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:50.472346   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:51.753761   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:54.316102   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:59.437757   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-313128 -v=7 --alsologtostderr: (52.819225424s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-313128 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp testdata/cp-test.txt ha-313128:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2237225197/001/cp-test_ha-313128.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128:/home/docker/cp-test.txt ha-313128-m02:/home/docker/cp-test_ha-313128_ha-313128-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m02 "sudo cat /home/docker/cp-test_ha-313128_ha-313128-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128:/home/docker/cp-test.txt ha-313128-m03:/home/docker/cp-test_ha-313128_ha-313128-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m03 "sudo cat /home/docker/cp-test_ha-313128_ha-313128-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128:/home/docker/cp-test.txt ha-313128-m04:/home/docker/cp-test_ha-313128_ha-313128-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m04 "sudo cat /home/docker/cp-test_ha-313128_ha-313128-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp testdata/cp-test.txt ha-313128-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m02 "sudo cat /home/docker/cp-test.txt"
E0906 18:55:09.679821   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2237225197/001/cp-test_ha-313128-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m02:/home/docker/cp-test.txt ha-313128:/home/docker/cp-test_ha-313128-m02_ha-313128.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128 "sudo cat /home/docker/cp-test_ha-313128-m02_ha-313128.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m02:/home/docker/cp-test.txt ha-313128-m03:/home/docker/cp-test_ha-313128-m02_ha-313128-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m03 "sudo cat /home/docker/cp-test_ha-313128-m02_ha-313128-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m02:/home/docker/cp-test.txt ha-313128-m04:/home/docker/cp-test_ha-313128-m02_ha-313128-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m04 "sudo cat /home/docker/cp-test_ha-313128-m02_ha-313128-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp testdata/cp-test.txt ha-313128-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2237225197/001/cp-test_ha-313128-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt ha-313128:/home/docker/cp-test_ha-313128-m03_ha-313128.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128 "sudo cat /home/docker/cp-test_ha-313128-m03_ha-313128.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt ha-313128-m02:/home/docker/cp-test_ha-313128-m03_ha-313128-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m02 "sudo cat /home/docker/cp-test_ha-313128-m03_ha-313128-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m03:/home/docker/cp-test.txt ha-313128-m04:/home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m04 "sudo cat /home/docker/cp-test_ha-313128-m03_ha-313128-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp testdata/cp-test.txt ha-313128-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2237225197/001/cp-test_ha-313128-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt ha-313128:/home/docker/cp-test_ha-313128-m04_ha-313128.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128 "sudo cat /home/docker/cp-test_ha-313128-m04_ha-313128.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt ha-313128-m02:/home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m02 "sudo cat /home/docker/cp-test_ha-313128-m04_ha-313128-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 cp ha-313128-m04:/home/docker/cp-test.txt ha-313128-m03:/home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m03 "sudo cat /home/docker/cp-test_ha-313128-m04_ha-313128-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.78s)
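Each CopyFile step above pairs a copy with an ssh read-back on the target node; in minimal form the pattern is:

    out/minikube-linux-amd64 -p ha-313128 cp testdata/cp-test.txt ha-313128-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-313128 ssh -n ha-313128-m02 "sudo cat /home/docker/cp-test.txt"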

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.478536434s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (229.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-313128 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0906 19:16:44.178285   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-313128 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m48.969003168s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (229.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (74.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-313128 --control-plane -v=7 --alsologtostderr
E0906 19:19:47.250182   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:19:49.185049   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-313128 --control-plane -v=7 --alsologtostderr: (1m13.221996942s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-313128 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.04s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
x
+
TestJSONOutput/start/Command (58.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-175597 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-175597 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.973478038s)
--- PASS: TestJSONOutput/start/Command (58.97s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-175597 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-175597 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-175597 --output=json --user=testUser
E0906 19:21:44.178846   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-175597 --output=json --user=testUser: (7.364909748s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-811635 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-811635 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.738373ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"aa8a068c-ef2a-40b7-a6de-cc5464af0edd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-811635] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb32d1f7-cadc-4bff-ae2d-acfc82d48fd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"bfa810ef-6335-4a72-aacd-6863ffce6a30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"acc7fec6-30ee-45da-bd16-75394cc29fe4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig"}}
	{"specversion":"1.0","id":"62b6417a-be08-4870-b282-fd28a24275eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube"}}
	{"specversion":"1.0","id":"32c2c210-e70f-4815-89a3-f08fd735a868","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d43397b5-7d4a-4264-944f-75fea34630f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3e6a8e71-b391-46ce-8a28-e0aca373a2a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-811635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-811635
--- PASS: TestErrorJSONOutput (0.18s)
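The stdout above is one CloudEvents-style JSON object per line. A minimal, hedged sketch for pulling out just the error event (assumes jq is available; the field names are copied verbatim from the output above):

	# keep only the error event and print its name, message and exit code
	out/minikube-linux-amd64 start -p json-output-error-811635 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'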

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (89.41s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-194185 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-194185 --driver=kvm2  --container-runtime=crio: (39.993614801s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-196751 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-196751 --driver=kvm2  --container-runtime=crio: (46.819528926s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-194185
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-196751
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-196751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-196751
helpers_test.go:175: Cleaning up "first-194185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-194185
--- PASS: TestMinikubeProfile (89.41s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-577531 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-577531 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.644156979s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.64s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-577531 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-577531 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (24.42s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-600777 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-600777 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.417122465s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.42s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600777 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600777 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-577531 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600777 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600777 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-600777
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-600777: (1.281534988s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-600777
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-600777: (22.215271747s)
--- PASS: TestMountStart/serial/RestartStopped (23.22s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600777 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600777 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (118.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-002640 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0906 19:24:49.185755   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-002640 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.495139614s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.89s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-002640 -- rollout status deployment/busybox: (4.549853882s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- exec busybox-7dff88458-6rcfw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- exec busybox-7dff88458-lmdp2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- exec busybox-7dff88458-6rcfw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- exec busybox-7dff88458-lmdp2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- exec busybox-7dff88458-6rcfw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- exec busybox-7dff88458-lmdp2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.09s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- exec busybox-7dff88458-6rcfw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- exec busybox-7dff88458-6rcfw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- exec busybox-7dff88458-lmdp2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-002640 -- exec busybox-7dff88458-lmdp2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
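The pipeline above resolves host.minikube.internal inside each busybox pod, takes the address field from line 5 of the nslookup output, and then pings it once. A hedged re-run of the same check from inside a pod (assumes BusyBox's nslookup output layout, as in the test image):

	# grab the host IP the same way the test does, then ping it once
	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"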

                                                
                                    
x
+
TestMultiNode/serial/AddNode (50.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-002640 -v 3 --alsologtostderr
E0906 19:26:44.178275   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-002640 -v 3 --alsologtostderr: (50.213966006s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.77s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-002640 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp testdata/cp-test.txt multinode-002640:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp multinode-002640:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3017084892/001/cp-test_multinode-002640.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp multinode-002640:/home/docker/cp-test.txt multinode-002640-m02:/home/docker/cp-test_multinode-002640_multinode-002640-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m02 "sudo cat /home/docker/cp-test_multinode-002640_multinode-002640-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp multinode-002640:/home/docker/cp-test.txt multinode-002640-m03:/home/docker/cp-test_multinode-002640_multinode-002640-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m03 "sudo cat /home/docker/cp-test_multinode-002640_multinode-002640-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp testdata/cp-test.txt multinode-002640-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp multinode-002640-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3017084892/001/cp-test_multinode-002640-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp multinode-002640-m02:/home/docker/cp-test.txt multinode-002640:/home/docker/cp-test_multinode-002640-m02_multinode-002640.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640 "sudo cat /home/docker/cp-test_multinode-002640-m02_multinode-002640.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp multinode-002640-m02:/home/docker/cp-test.txt multinode-002640-m03:/home/docker/cp-test_multinode-002640-m02_multinode-002640-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m03 "sudo cat /home/docker/cp-test_multinode-002640-m02_multinode-002640-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp testdata/cp-test.txt multinode-002640-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp multinode-002640-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3017084892/001/cp-test_multinode-002640-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp multinode-002640-m03:/home/docker/cp-test.txt multinode-002640:/home/docker/cp-test_multinode-002640-m03_multinode-002640.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640 "sudo cat /home/docker/cp-test_multinode-002640-m03_multinode-002640.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 cp multinode-002640-m03:/home/docker/cp-test.txt multinode-002640-m02:/home/docker/cp-test_multinode-002640-m03_multinode-002640-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 ssh -n multinode-002640-m02 "sudo cat /home/docker/cp-test_multinode-002640-m03_multinode-002640-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.89s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-002640 node stop m03: (1.496371818s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-002640 status: exit status 7 (419.697096ms)

                                                
                                                
-- stdout --
	multinode-002640
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-002640-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-002640-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-002640 status --alsologtostderr: exit status 7 (404.633487ms)

                                                
                                                
-- stdout --
	multinode-002640
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-002640-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-002640-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:27:39.683935   43253 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:27:39.684040   43253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:27:39.684049   43253 out.go:358] Setting ErrFile to fd 2...
	I0906 19:27:39.684053   43253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:27:39.684230   43253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:27:39.684391   43253 out.go:352] Setting JSON to false
	I0906 19:27:39.684414   43253 mustload.go:65] Loading cluster: multinode-002640
	I0906 19:27:39.684453   43253 notify.go:220] Checking for updates...
	I0906 19:27:39.684771   43253 config.go:182] Loaded profile config "multinode-002640": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:27:39.684784   43253 status.go:255] checking status of multinode-002640 ...
	I0906 19:27:39.685189   43253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:27:39.685257   43253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:27:39.704776   43253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0906 19:27:39.705192   43253 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:27:39.705699   43253 main.go:141] libmachine: Using API Version  1
	I0906 19:27:39.705724   43253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:27:39.706036   43253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:27:39.706205   43253 main.go:141] libmachine: (multinode-002640) Calling .GetState
	I0906 19:27:39.707789   43253 status.go:330] multinode-002640 host status = "Running" (err=<nil>)
	I0906 19:27:39.707807   43253 host.go:66] Checking if "multinode-002640" exists ...
	I0906 19:27:39.708191   43253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:27:39.708230   43253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:27:39.722666   43253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0906 19:27:39.723017   43253 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:27:39.723419   43253 main.go:141] libmachine: Using API Version  1
	I0906 19:27:39.723436   43253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:27:39.723685   43253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:27:39.723837   43253 main.go:141] libmachine: (multinode-002640) Calling .GetIP
	I0906 19:27:39.726303   43253 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:27:39.726675   43253 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:27:39.726701   43253 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:27:39.726822   43253 host.go:66] Checking if "multinode-002640" exists ...
	I0906 19:27:39.727204   43253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:27:39.727277   43253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:27:39.741599   43253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36873
	I0906 19:27:39.741978   43253 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:27:39.742363   43253 main.go:141] libmachine: Using API Version  1
	I0906 19:27:39.742384   43253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:27:39.742623   43253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:27:39.742776   43253 main.go:141] libmachine: (multinode-002640) Calling .DriverName
	I0906 19:27:39.742917   43253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:27:39.742944   43253 main.go:141] libmachine: (multinode-002640) Calling .GetSSHHostname
	I0906 19:27:39.745356   43253 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:27:39.745697   43253 main.go:141] libmachine: (multinode-002640) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:68:e3", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:24:48 +0000 UTC Type:0 Mac:52:54:00:5c:68:e3 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:multinode-002640 Clientid:01:52:54:00:5c:68:e3}
	I0906 19:27:39.745722   43253 main.go:141] libmachine: (multinode-002640) DBG | domain multinode-002640 has defined IP address 192.168.39.11 and MAC address 52:54:00:5c:68:e3 in network mk-multinode-002640
	I0906 19:27:39.745820   43253 main.go:141] libmachine: (multinode-002640) Calling .GetSSHPort
	I0906 19:27:39.745991   43253 main.go:141] libmachine: (multinode-002640) Calling .GetSSHKeyPath
	I0906 19:27:39.746123   43253 main.go:141] libmachine: (multinode-002640) Calling .GetSSHUsername
	I0906 19:27:39.746264   43253 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/multinode-002640/id_rsa Username:docker}
	I0906 19:27:39.828120   43253 ssh_runner.go:195] Run: systemctl --version
	I0906 19:27:39.834169   43253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:27:39.849464   43253 kubeconfig.go:125] found "multinode-002640" server: "https://192.168.39.11:8443"
	I0906 19:27:39.849497   43253 api_server.go:166] Checking apiserver status ...
	I0906 19:27:39.849535   43253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:27:39.863528   43253 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1067/cgroup
	W0906 19:27:39.873622   43253 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1067/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0906 19:27:39.873669   43253 ssh_runner.go:195] Run: ls
	I0906 19:27:39.877988   43253 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I0906 19:27:39.882082   43253 api_server.go:279] https://192.168.39.11:8443/healthz returned 200:
	ok
	I0906 19:27:39.882107   43253 status.go:422] multinode-002640 apiserver status = Running (err=<nil>)
	I0906 19:27:39.882115   43253 status.go:257] multinode-002640 status: &{Name:multinode-002640 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:27:39.882130   43253 status.go:255] checking status of multinode-002640-m02 ...
	I0906 19:27:39.882420   43253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:27:39.882483   43253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:27:39.897404   43253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36831
	I0906 19:27:39.897780   43253 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:27:39.898261   43253 main.go:141] libmachine: Using API Version  1
	I0906 19:27:39.898281   43253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:27:39.898567   43253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:27:39.898913   43253 main.go:141] libmachine: (multinode-002640-m02) Calling .GetState
	I0906 19:27:39.900368   43253 status.go:330] multinode-002640-m02 host status = "Running" (err=<nil>)
	I0906 19:27:39.900386   43253 host.go:66] Checking if "multinode-002640-m02" exists ...
	I0906 19:27:39.900656   43253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:27:39.900688   43253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:27:39.915832   43253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39019
	I0906 19:27:39.916165   43253 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:27:39.916586   43253 main.go:141] libmachine: Using API Version  1
	I0906 19:27:39.916605   43253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:27:39.916876   43253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:27:39.917088   43253 main.go:141] libmachine: (multinode-002640-m02) Calling .GetIP
	I0906 19:27:39.919706   43253 main.go:141] libmachine: (multinode-002640-m02) DBG | domain multinode-002640-m02 has defined MAC address 52:54:00:2b:1e:d7 in network mk-multinode-002640
	I0906 19:27:39.920045   43253 main.go:141] libmachine: (multinode-002640-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:1e:d7", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:25:55 +0000 UTC Type:0 Mac:52:54:00:2b:1e:d7 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:multinode-002640-m02 Clientid:01:52:54:00:2b:1e:d7}
	I0906 19:27:39.920064   43253 main.go:141] libmachine: (multinode-002640-m02) DBG | domain multinode-002640-m02 has defined IP address 192.168.39.12 and MAC address 52:54:00:2b:1e:d7 in network mk-multinode-002640
	I0906 19:27:39.920211   43253 host.go:66] Checking if "multinode-002640-m02" exists ...
	I0906 19:27:39.920524   43253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:27:39.920566   43253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:27:39.934981   43253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46061
	I0906 19:27:39.935465   43253 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:27:39.935894   43253 main.go:141] libmachine: Using API Version  1
	I0906 19:27:39.935915   43253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:27:39.936183   43253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:27:39.936331   43253 main.go:141] libmachine: (multinode-002640-m02) Calling .DriverName
	I0906 19:27:39.936517   43253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:27:39.936534   43253 main.go:141] libmachine: (multinode-002640-m02) Calling .GetSSHHostname
	I0906 19:27:39.939088   43253 main.go:141] libmachine: (multinode-002640-m02) DBG | domain multinode-002640-m02 has defined MAC address 52:54:00:2b:1e:d7 in network mk-multinode-002640
	I0906 19:27:39.939487   43253 main.go:141] libmachine: (multinode-002640-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:1e:d7", ip: ""} in network mk-multinode-002640: {Iface:virbr1 ExpiryTime:2024-09-06 20:25:55 +0000 UTC Type:0 Mac:52:54:00:2b:1e:d7 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:multinode-002640-m02 Clientid:01:52:54:00:2b:1e:d7}
	I0906 19:27:39.939517   43253 main.go:141] libmachine: (multinode-002640-m02) DBG | domain multinode-002640-m02 has defined IP address 192.168.39.12 and MAC address 52:54:00:2b:1e:d7 in network mk-multinode-002640
	I0906 19:27:39.939643   43253 main.go:141] libmachine: (multinode-002640-m02) Calling .GetSSHPort
	I0906 19:27:39.939839   43253 main.go:141] libmachine: (multinode-002640-m02) Calling .GetSSHKeyPath
	I0906 19:27:39.939997   43253 main.go:141] libmachine: (multinode-002640-m02) Calling .GetSSHUsername
	I0906 19:27:39.940099   43253 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19576-6021/.minikube/machines/multinode-002640-m02/id_rsa Username:docker}
	I0906 19:27:40.016001   43253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:27:40.030066   43253 status.go:257] multinode-002640-m02 status: &{Name:multinode-002640-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:27:40.030102   43253 status.go:255] checking status of multinode-002640-m03 ...
	I0906 19:27:40.030436   43253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0906 19:27:40.030483   43253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0906 19:27:40.045403   43253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38919
	I0906 19:27:40.045851   43253 main.go:141] libmachine: () Calling .GetVersion
	I0906 19:27:40.046328   43253 main.go:141] libmachine: Using API Version  1
	I0906 19:27:40.046348   43253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0906 19:27:40.046680   43253 main.go:141] libmachine: () Calling .GetMachineName
	I0906 19:27:40.046872   43253 main.go:141] libmachine: (multinode-002640-m03) Calling .GetState
	I0906 19:27:40.048353   43253 status.go:330] multinode-002640-m03 host status = "Stopped" (err=<nil>)
	I0906 19:27:40.048367   43253 status.go:343] host is not running, skipping remaining checks
	I0906 19:27:40.048375   43253 status.go:257] multinode-002640-m03 status: &{Name:multinode-002640-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
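Both status calls above return exit code 7 once m03 is stopped, so a stopped node can be detected from the exit code alone instead of parsing the table. A small hedged sketch of that pattern:

	out/minikube-linux-amd64 -p multinode-002640 status
	rc=$?
	# exit 0: all nodes running; non-zero (7 in the run above) indicates at least one stopped host
	if [ "$rc" -ne 0 ]; then
	  echo "profile multinode-002640 has a node that is not running (status exit $rc)"
	fi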

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (36.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 node start m03 -v=7 --alsologtostderr
E0906 19:27:52.253262   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-002640 node start m03 -v=7 --alsologtostderr: (35.787003644s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.38s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-002640 node delete m03: (1.683574666s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.18s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (184.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-002640 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0906 19:36:27.253479   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:36:44.179082   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-002640 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m3.614536116s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-002640 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (184.14s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-002640
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-002640-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-002640-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (64.680215ms)

                                                
                                                
-- stdout --
	* [multinode-002640-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-002640-m02' is duplicated with machine name 'multinode-002640-m02' in profile 'multinode-002640'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-002640-m03 --driver=kvm2  --container-runtime=crio
E0906 19:39:49.185063   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-002640-m03 --driver=kvm2  --container-runtime=crio: (42.780283142s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-002640
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-002640: exit status 80 (197.041968ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-002640 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-002640-m03 already exists in multinode-002640-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-002640-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.06s)
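The two failures above are the expected guardrails: a new profile may not reuse a machine name that already belongs to another cluster (exit 14, MK_USAGE), and node add refuses to create a node whose generated name collides with an existing standalone profile (exit 80, GUEST_NODE_ADD). A hedged replay of both checks with their exit codes made explicit:

	out/minikube-linux-amd64 start -p multinode-002640-m02 --driver=kvm2 --container-runtime=crio
	echo "start exit: $?"     # 14 (MK_USAGE) in the run above
	out/minikube-linux-amd64 node add -p multinode-002640
	echo "node add exit: $?"  # 80 (GUEST_NODE_ADD) in the run above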

                                                
                                    
x
+
TestScheduledStopUnix (112.87s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-335480 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-335480 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.320682048s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-335480 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-335480 -n scheduled-stop-335480
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-335480 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-335480 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-335480 -n scheduled-stop-335480
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-335480
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-335480 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0906 19:44:32.257158   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-335480
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-335480: exit status 7 (63.90724ms)

                                                
                                                
-- stdout --
	scheduled-stop-335480
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-335480 -n scheduled-stop-335480
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-335480 -n scheduled-stop-335480: exit status 7 (63.261027ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-335480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-335480
--- PASS: TestScheduledStopUnix (112.87s)
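
The scheduled-stop flow above exercises the --schedule and --cancel-scheduled flags; outside the test harness it looks roughly like this (profile name is a placeholder):

	# Ask minikube to stop the cluster five minutes from now
	minikube stop -p demo --schedule 5m

	# Cancel the pending stop
	minikube stop -p demo --cancel-scheduled

	# Schedule a short stop and poll until the host reports Stopped
	minikube stop -p demo --schedule 15s
	minikube status -p demo --format='{{.Host}}'    # prints Stopped once the stop has fired (status exits 7)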

                                                
                                    
x
+
TestRunningBinaryUpgrade (195.49s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1425555478 start -p running-upgrade-952957 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0906 19:44:49.184756   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1425555478 start -p running-upgrade-952957 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m11.046695851s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-952957 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-952957 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.667515206s)
helpers_test.go:175: Cleaning up "running-upgrade-952957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-952957
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-952957: (1.143388166s)
--- PASS: TestRunningBinaryUpgrade (195.49s)
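
TestRunningBinaryUpgrade above upgrades a cluster in place: it starts the profile with an older release, then re-runs start on the same profile with the newer binary while the cluster is still running. Roughly, with illustrative binary paths and profile name:

	# Bring the cluster up with an older minikube release
	/path/to/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio

	# Without stopping it, run start again with the newer binary to upgrade in place
	./minikube-linux-amd64 start -p upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio

	# Clean up afterwards
	./minikube-linux-amd64 delete -p upgrade-demo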

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-944227 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-944227 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.077111ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-944227] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
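
The MK_USAGE failure above is intentional: --no-kubernetes cannot be combined with --kubernetes-version. Following the hint printed in the stderr block, the working flow is roughly:

	# Clear any globally configured Kubernetes version first
	minikube config unset kubernetes-version

	# Then a VM without Kubernetes can be started (profile name is a placeholder)
	minikube start -p no-k8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio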

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (100.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-944227 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-944227 --driver=kvm2  --container-runtime=crio: (1m40.607772651s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-944227 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-603826 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-603826 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (109.209387ms)

                                                
                                                
-- stdout --
	* [false-603826] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 19:46:01.943578   51746 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:46:01.943824   51746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:46:01.943833   51746 out.go:358] Setting ErrFile to fd 2...
	I0906 19:46:01.943838   51746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:46:01.944512   51746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-6021/.minikube/bin
	I0906 19:46:01.945574   51746 out.go:352] Setting JSON to false
	I0906 19:46:01.946686   51746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5311,"bootTime":1725646651,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0906 19:46:01.946768   51746 start.go:139] virtualization: kvm guest
	I0906 19:46:01.948930   51746 out.go:177] * [false-603826] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0906 19:46:01.950479   51746 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:46:01.950499   51746 notify.go:220] Checking for updates...
	I0906 19:46:01.952911   51746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:46:01.954168   51746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-6021/kubeconfig
	I0906 19:46:01.955296   51746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-6021/.minikube
	I0906 19:46:01.956461   51746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0906 19:46:01.957526   51746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:46:01.959261   51746 config.go:182] Loaded profile config "NoKubernetes-944227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0906 19:46:01.959428   51746 config.go:182] Loaded profile config "kubernetes-upgrade-959423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0906 19:46:01.959560   51746 config.go:182] Loaded profile config "running-upgrade-952957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0906 19:46:01.959700   51746 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:46:01.998416   51746 out.go:177] * Using the kvm2 driver based on user configuration
	I0906 19:46:01.999635   51746 start.go:297] selected driver: kvm2
	I0906 19:46:01.999652   51746 start.go:901] validating driver "kvm2" against <nil>
	I0906 19:46:01.999666   51746 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:46:02.001702   51746 out.go:201] 
	W0906 19:46:02.002901   51746 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0906 19:46:02.004027   51746 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-603826 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-603826" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-603826

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603826"

                                                
                                                
----------------------- debugLogs end: false-603826 [took: 2.879947517s] --------------------------------
helpers_test.go:175: Cleaning up "false-603826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-603826
--- PASS: TestNetworkPlugins/group/false (3.15s)
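
The expected failure above confirms that --cni=false is rejected when the container runtime is crio, which requires a CNI plugin. The other network-plugin groups in this report show the accepted alternatives; a minimal sketch with a placeholder profile name:

	# Rejected: crio needs a CNI
	minikube start -p cni-demo --cni=false --driver=kvm2 --container-runtime=crio     # MK_USAGE, exit status 14

	# Accepted: name a plugin (e.g. bridge, kindnet, calico, flannel) or pass a manifest file
	minikube start -p cni-demo --cni=bridge --driver=kvm2 --container-runtime=crio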

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (40.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-944227 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0906 19:46:44.178797   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-944227 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.082911379s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-944227 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-944227 status -o json: exit status 2 (242.467232ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-944227","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-944227
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-944227: (1.067028565s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (40.39s)
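
The non-zero exit from `status -o json` above is expected once Kubernetes has been dropped from the profile: the JSON body still reports the host and component states, while the exit code signals that some components are stopped (2 in this run, with Kubelet and APIServer shown as Stopped). For scripting, something like:

	# Machine-readable status; profile name is a placeholder
	minikube status -p no-k8s-demo -o json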

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-944227 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-944227 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.526179456s)
--- PASS: TestNoKubernetes/serial/Start (28.53s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (150.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.756812838 start -p stopped-upgrade-098096 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.756812838 start -p stopped-upgrade-098096 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m3.353320184s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.756812838 -p stopped-upgrade-098096 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.756812838 -p stopped-upgrade-098096 stop: (1.452357681s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-098096 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-098096 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m25.348189673s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (150.15s)
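
TestStoppedBinaryUpgrade above is the stop-then-upgrade variant: the cluster is created and stopped with the old binary, then started again with the new one. Roughly, with illustrative paths and profile name:

	# Create and stop the cluster with the older release
	/path/to/minikube-v1.26.0 start -p stopped-upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/path/to/minikube-v1.26.0 stop -p stopped-upgrade-demo

	# Start it again with the newer binary to complete the upgrade
	./minikube-linux-amd64 start -p stopped-upgrade-demo --memory=2200 --driver=kvm2 --container-runtime=crio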

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-944227 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-944227 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.643538ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
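
The VerifyK8sNotRunning check above simply asks systemd, over minikube ssh, whether the kubelet unit is active; with --no-kubernetes it should not be, so the command exits non-zero (systemctl reports 3 for an inactive unit). Sketch, with a placeholder profile name:

	# Non-zero exit means the kubelet is not running inside the VM
	minikube ssh -p no-k8s-demo "sudo systemctl is-active --quiet service kubelet"; echo $?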

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-944227
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-944227: (1.292702317s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (42.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-944227 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-944227 --driver=kvm2  --container-runtime=crio: (42.046480366s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-944227 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-944227 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.725823ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-098096
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                    
x
+
TestPause/serial/Start (76.94s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-306799 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0906 19:49:49.184722   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-306799 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m16.941687861s)
--- PASS: TestPause/serial/Start (76.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (116.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m56.229582493s)
--- PASS: TestNetworkPlugins/group/auto/Start (116.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (65.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m5.401812028s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (99.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m39.853594684s)
--- PASS: TestNetworkPlugins/group/calico/Start (99.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-603826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-603826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kckc7" [93f2373c-becc-48a1-9389-5f568d661a5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kckc7" [93f2373c-becc-48a1-9389-5f568d661a5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003858654s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-603826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
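
The DNS, Localhost and HairPin checks above all go through the same netcat deployment; the probes are essentially the following (context name is a placeholder):

	# DNS: resolve the in-cluster API service from the pod
	kubectl --context auto-demo exec deployment/netcat -- nslookup kubernetes.default

	# Localhost: the pod can reach its own listening port
	kubectl --context auto-demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

	# HairPin: the pod can reach itself back through its own service name
	kubectl --context auto-demo exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"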

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (93.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m33.024258961s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (93.02s)
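
Unlike the named plugins (kindnet, calico, flannel, bridge) used elsewhere in this report, the custom-flannel group above passes a manifest path to --cni. Roughly, with an illustrative path and profile name:

	# --cni also accepts a path to a CNI manifest instead of a plugin name
	minikube start -p custom-cni-demo --cni=./kube-flannel.yaml --driver=kvm2 --container-runtime=crio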

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-x5pc6" [dc3ab260-3719-416f-971e-94d52b895f97] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004367216s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
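
The ControllerPod check above waits for the CNI's own pod (label app=kindnet in kube-system) to be Running before any connectivity probes run. An approximate by-hand equivalent, with a placeholder context name:

	kubectl --context kindnet-demo -n kube-system get pods -l app=kindnet
	kubectl --context kindnet-demo -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m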

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-603826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-603826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pzht2" [faed9da3-80f5-478c-b799-981cc4f7a8f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pzht2" [faed9da3-80f5-478c-b799-981cc4f7a8f0] Running
E0906 19:53:07.255448   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004678305s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-603826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (56.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (56.944408303s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (89.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m29.076314501s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-mnblh" [2af3a5f8-c7b1-4532-bf0e-4a4c72427352] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005344929s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-603826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-603826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4krhk" [996c2fbb-acad-4b83-93e6-16e4951f646e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4krhk" [996c2fbb-acad-4b83-93e6-16e4951f646e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004623304s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-603826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-603826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-603826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pkskb" [64a32805-34f5-40a7-8524-c442b5130720] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pkskb" [64a32805-34f5-40a7-8524-c442b5130720] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00421298s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (101.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-603826 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m41.727193789s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-603826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-603826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-603826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wd5fj" [4422a0fa-75d0-43e0-989a-77730a9937a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wd5fj" [4422a0fa-75d0-43e0-989a-77730a9937a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00448375s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-603826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (94.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-504385 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-504385 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m34.977062255s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (94.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gp469" [7a2fcc36-714d-4073-bf61-427649678671] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004719663s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-603826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-603826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dqrm8" [aedebbb5-c4bf-40f2-a251-6c24b3c101e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dqrm8" [aedebbb5-c4bf-40f2-a251-6c24b3c101e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003526036s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-603826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (62.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-458066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-458066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m2.579976936s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-603826 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-603826 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-djvk5" [b5fc527c-b505-4a37-b387-93571cc9b227] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-djvk5" [b5fc527c-b505-4a37-b387-93571cc9b227] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004155623s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-603826 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-603826 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0906 20:24:58.425262   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-653828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-653828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (56.201074197s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-504385 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [05b71860-303a-4ca5-880a-bfa71ac2c956] Pending
helpers_test.go:344: "busybox" [05b71860-303a-4ca5-880a-bfa71ac2c956] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [05b71860-303a-4ca5-880a-bfa71ac2c956] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004593157s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-504385 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-458066 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e7af9141-8e64-472b-9147-6ff112819513] Pending
helpers_test.go:344: "busybox" [e7af9141-8e64-472b-9147-6ff112819513] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e7af9141-8e64-472b-9147-6ff112819513] Running
E0906 19:56:44.178422   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/addons-959832/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004650802s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-458066 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-504385 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-504385 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-458066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-458066 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-653828 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [81f8f398-5491-4465-90e2-1312472db33c] Pending
E0906 19:57:17.617739   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/auto-603826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [81f8f398-5491-4465-90e2-1312472db33c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [81f8f398-5491-4465-90e2-1312472db33c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003949657s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-653828 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-653828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-653828 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (686.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-504385 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-504385 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m26.537240277s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-504385 -n no-preload-504385
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (686.78s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (604.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-458066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-458066 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m4.237591849s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-458066 -n embed-certs-458066
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (604.50s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (585.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-653828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0906 19:59:58.425332   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:58.431754   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:58.443195   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:58.464647   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:58.506136   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:58.587620   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:58.749167   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:59.070895   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:59:59.712241   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:00.994288   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:03.555938   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:04.339436   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:08.678264   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:18.920578   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:20.151527   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/custom-flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:37.238991   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/kindnet-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:39.402366   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/flannel-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:45.301027   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/enable-default-cni-603826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-653828 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m45.18079944s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653828 -n default-k8s-diff-port-653828
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (585.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-843298 --alsologtostderr -v=3
E0906 20:00:50.866628   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:50.872987   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:50.884345   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:50.905762   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:50.947192   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:51.029465   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:51.191080   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:51.512909   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:52.154989   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
E0906 20:00:53.436846   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/bridge-603826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-843298 --alsologtostderr -v=3: (5.477892746s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-843298 -n old-k8s-version-843298: exit status 7 (64.012885ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-843298 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-113806 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0906 20:24:49.184060   13178 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-6021/.minikube/profiles/functional-206035/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-113806 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (47.896400804s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.90s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-113806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-113806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025546571s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-113806 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-113806 --alsologtostderr -v=3: (10.535413233s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-113806 -n newest-cni-113806
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-113806 -n newest-cni-113806: exit status 7 (65.35762ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-113806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-113806 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-113806 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (36.473398859s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-113806 -n newest-cni-113806
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.94s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-113806 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-113806 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-113806 -n newest-cni-113806
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-113806 -n newest-cni-113806: exit status 2 (337.885282ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-113806 -n newest-cni-113806
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-113806 -n newest-cni-113806: exit status 2 (365.597546ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-113806 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-113806 -n newest-cni-113806
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-113806 -n newest-cni-113806
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.52s)

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
149 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
255 TestNetworkPlugins/group/kubenet 4.37
263 TestNetworkPlugins/group/cilium 3.24
281 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-603826 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-603826

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-603826

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-603826

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-603826

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-603826

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-603826

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-603826

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-603826

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-603826

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-603826

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: /etc/hosts:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: /etc/resolv.conf:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-603826

>>> host: crictl pods:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: crictl containers:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> k8s: describe netcat deployment:
error: context "kubenet-603826" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-603826" does not exist

>>> k8s: netcat logs:
error: context "kubenet-603826" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-603826" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-603826" does not exist

>>> k8s: coredns logs:
error: context "kubenet-603826" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-603826" does not exist

>>> k8s: api server logs:
error: context "kubenet-603826" does not exist

>>> host: /etc/cni:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: ip a s:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: ip r s:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: iptables-save:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: iptables table nat:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-603826" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-603826" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-603826" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: kubelet daemon config:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> k8s: kubelet logs:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-603826

>>> host: docker daemon status:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: docker daemon config:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: docker system info:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: cri-docker daemon status:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: cri-docker daemon config:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: cri-dockerd version:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: containerd daemon status:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: containerd daemon config:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: containerd config dump:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: crio daemon status:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: crio daemon config:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: /etc/crio:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

>>> host: crio config:
* Profile "kubenet-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603826"

----------------------- debugLogs end: kubenet-603826 [took: 4.229853024s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-603826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-603826
--- SKIP: TestNetworkPlugins/group/kubenet (4.37s)
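Every probe in the debugLogs dump above fails with a missing-context or missing-profile error because the kubenet group is skipped and its profile deleted before a cluster ever starts: kubenet provides no CNI plugin, and the crio runtime requires one. A hedged sketch, assuming a simplified gate named maybeSkipKubenet (not the actual net_test.go:93 logic):

package net_test

import "testing"

// maybeSkipKubenet is an assumed, simplified stand-in for the gate that
// produced the skip above: kubenet ships no CNI plugin, and the crio runtime
// requires CNI, so the combination is rejected before a cluster is started.
func maybeSkipKubenet(t *testing.T, containerRuntime string) {
    if containerRuntime == "crio" {
        t.Skip("Skipping the test as crio container runtime requires CNI")
    }
}

A real implementation would read the runtime from the suite's flags rather than take it as a parameter; the sketch only illustrates why no cluster (and hence no kubectl context) exists when the debug probes run.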

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-603826 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-603826

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-603826

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-603826

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-603826

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-603826

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-603826

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-603826

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-603826

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-603826

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-603826

>>> host: /etc/nsswitch.conf:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: /etc/hosts:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: /etc/resolv.conf:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-603826

>>> host: crictl pods:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: crictl containers:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> k8s: describe netcat deployment:
error: context "cilium-603826" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-603826" does not exist

>>> k8s: netcat logs:
error: context "cilium-603826" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-603826" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-603826" does not exist

>>> k8s: coredns logs:
error: context "cilium-603826" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-603826" does not exist

>>> k8s: api server logs:
error: context "cilium-603826" does not exist

>>> host: /etc/cni:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: ip a s:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: ip r s:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: iptables-save:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: iptables table nat:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-603826

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-603826

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-603826" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-603826" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-603826

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-603826

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-603826" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-603826" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-603826" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-603826" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-603826" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: kubelet daemon config:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> k8s: kubelet logs:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-603826

>>> host: docker daemon status:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: docker daemon config:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: docker system info:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: cri-docker daemon status:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: cri-docker daemon config:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: cri-dockerd version:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: containerd daemon status:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: containerd daemon config:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: containerd config dump:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: crio daemon status:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: crio daemon config:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: /etc/crio:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

>>> host: crio config:
* Profile "cilium-603826" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603826"

----------------------- debugLogs end: cilium-603826 [took: 3.103763748s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-603826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-603826
--- SKIP: TestNetworkPlugins/group/cilium (3.24s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-859361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-859361
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    